Schema (field / type / range):
  query_id            stringlengths   32 – 32
  query               stringlengths   6 – 5.38k
  positive_passages   listlengths     1 – 22
  negative_passages   listlengths     9 – 100
  subset              stringclasses   7 values
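The rows that follow use this schema. As a rough sketch of how such rows might be consumed — assuming the dump mirrors a Hugging Face-style retrieval dataset, and with a placeholder dataset path since none is given in the dump — loading and iterating could look like this:

```python
# Minimal loading sketch (assumption: the dump corresponds to a Hugging Face-style
# retrieval dataset; the dataset path below is a placeholder, not taken from the dump).
from datasets import load_dataset

ds = load_dataset("namespace/retrieval-dataset", split="train")  # hypothetical path

for row in ds:
    query_id = row["query_id"]            # 32-character identifier
    query = row["query"]                  # query text (6 to ~5.38k characters)
    positives = row["positive_passages"]  # 1-22 dicts with "docid", "text", "title"
    negatives = row["negative_passages"]  # 9-100 dicts with "docid", "text", "title"
    subset = row["subset"]                # one of 7 subset names, e.g. "scidocsrr"

    # Example use: build (query, passage, label) pairs for training a retriever.
    pairs = [(query, p["text"], 1) for p in positives] + \
            [(query, n["text"], 0) for n in negatives]
```

The (query, passage, label) pairs are only one possible use; the "docid" and "title" fields can be carried along in the same way if needed.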
305f440bdbf13e2791c5426ff4070efd
THE PSYCHOLOGY OF SELF-DEFENSE: SELF-AFFIRMATION THEORY
[ { "docid": "f5bc721d2b63912307c4ad04fb78dd2c", "text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament st reotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even", "title": "" } ]
[ { "docid": "fe2b8921623f3bcf7b8789853b45e912", "text": "OBJECTIVE\nTo establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome.\n\n\nMETHOD\nWe studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation.\n\n\nRESULTS\nAt follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation.\n\n\nCONCLUSIONS\nMost children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.", "title": "" }, { "docid": "1274e55cc173f64fcc9a191d859c2e41", "text": "We present an O*(n3) randomized algorithm for estimating the volume of a well-rounded convex body given by a membership oracle, improving on the previous best complexity of O*(n4). The new algorithmic ingredient is an accelerated cooling schedule where the rate of cooling increases with the temperature. Previously, the known approach for potentially achieving such complexity relied on a positive resolution of the KLS hyperplane conjecture, a central open problem in convex geometry.", "title": "" }, { "docid": "af84229b7237e9f85f2273896a808b83", "text": "Distributed word representation is an efficient method for capturing semantic and syntactic word relations. In this work, we introduce an extension to the continuous bag-of-words model for learning word representations efficiently by using implicit structure information. Instead of relying on a syntactic parser which might be noisy and slow to build, we compute weights representing probabilities of syntactic relations based on the Huffman softmax tree in an efficient heuristic. The constructed “implicit graphs” from these weights show that these weights contain useful implicit structure information. Extensive experiments performed on several word similarity and word analogy tasks show gains compared to the basic continuous bag-of-words model.", "title": "" }, { "docid": "fc09e1c012016c75418ec33dfe5868d5", "text": "Big data is the word used to describe structured and unstructured data. 
The term big data is originated from the web search companies who had to query loosely structured very large", "title": "" }, { "docid": "2ab4619cd5f7ec48596ce63bd111a23b", "text": "Growing demand for ubiquitous and pervasive computing has triggered a sharp rise in handheld device usage. At the same time, dynamic multimedia data has become accepted as core material which many important applications depend on, despite intensive costs in computation and resources. This paper investigates the suitability and constraints of using handheld devices for such applications. We firstly analyse the capabilities and limitations of current models of handheld devices and advanced features offered by next generation models. We then categorise these applications and discuss the typical requirements of each class. Important issues to be considered include data organisation and management, communication, and input and user interfaces. Finally, we briefly discuss future outlook and identify remaining areas for research.", "title": "" }, { "docid": "a65930b1f31421bb4222933a36ac93c7", "text": "Personalized nutrition is fast becoming a reality due to a number of technological, scientific, and societal developments that complement and extend current public health nutrition recommendations. Personalized nutrition tailors dietary recommendations to specific biological requirements on the basis of a person's health status and goals. The biology underpinning these recommendations is complex, and thus any recommendations must account for multiple biological processes and subprocesses occurring in various tissues and must be formed with an appreciation for how these processes interact with dietary nutrients and environmental factors. Therefore, a systems biology-based approach that considers the most relevant interacting biological mechanisms is necessary to formulate the best recommendations to help people meet their wellness goals. Here, the concept of \"systems flexibility\" is introduced to personalized nutrition biology. Systems flexibility allows the real-time evaluation of metabolism and other processes that maintain homeostasis following an environmental challenge, thereby enabling the formulation of personalized recommendations. Examples in the area of macro- and micronutrients are reviewed. Genetic variations and performance goals are integrated into this systems approach to provide a strategy for a balanced evaluation and an introduction to personalized nutrition. Finally, modeling approaches that combine personalized diagnosis and nutritional intervention into practice are reviewed.", "title": "" }, { "docid": "a241291333a570b7ca09e6ae49467ebf", "text": "This article aims to contribute to understanding how to use the Balanced Scorecard (BSC) effectively. The BSC lends itself to various interpretations. This article explores how the way in which the BSC is used affects performance. Empirical evidence from Dutch firms suggests BSC use will not automatically improve company performance, but that the manner of its use matters: BSC use that complements corporate strategy positively influences company performance, while BSC use that is not related to the strategy may decrease it. We discuss the findings and offer managers guidance for optimal use of the BSC. Q 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1592dc2c81d9d6b9c58cc1a5b530c923", "text": "We propose a cloudlet network architecture to bring the computing resources from the centralized cloud to the edge. 
Thus, each User Equipment (UE) can communicate with its Avatar, a software clone located in a cloudlet, and can thus lower the end-to-end (E2E) delay. However, UEs are moving over time, and so the low E2E delay may not be maintained if UEs' Avatars stay in their original cloudlets. Thus, live Avatar migration (i.e., migrating a UE's Avatar to a suitable cloudlet based on the UE's location) is enabled to maintain the low E2E delay between each UE and its Avatar. On the other hand, the migration itself incurs extra overheads in terms of resources of the Avatar, which compromise the performance of applications running in the Avatar. By considering the gain (i.e., the E2E delay reduction) and the cost (i.e., the migration overheads) of the live Avatar migration, we propose a PRofIt Maximization Avatar pLacement (PRIMAL) strategy for the cloudlet network in order to optimize the tradeoff between the migration gain and the migration cost by selectively migrating the Avatars to their optimal locations. Simulation results demonstrate that as compared to the other two strategies (i.e., Follow Me Avatar and Static), PRIMAL maximizes the profit in terms of maintaining the low average E2E delay between UEs and their Avatars and minimizing the migration cost simultaneously.", "title": "" }, { "docid": "1fcc1acdd4b7b170693af3d7da40f7f4", "text": "The intended purpose of this monograph is to provide a general overview of allergy diagnostics for health care professionals who care for patients with allergic disease. For a more comprehensive review of allergy diagnostic testing, readers can refer to the Allergy Diagnostic Practice Parameters. A key message is that a positive allergy test result (skin or blood) indicates only the presence of allergen specific IgE (called sensitization). It does not necessarily mean clinical allergy (ie, allergic symptoms with exposure). It is important for this reason that the allergy evaluation be based on the patient's history and directed by a health care professional with sufficient understanding of allergy diagnostic testing to use the information obtained from his/her evaluation of the patient to determine (1) what allergy diagnostic tests to order, (2) how to interpret the allergy diagnostic test results, and (3) how to use the information obtained from the allergy evaluation to develop an appropriate therapeutic treatment plan.", "title": "" }, { "docid": "847a64b0b5f2b8f3387c260bca8bb9c0", "text": "Pain-related emotions are a major barrier to effective self rehabilitation in chronic pain. Automated coaching systems capable of detecting these emotions are a potential solution. This paper lays the foundation for the development of such systems by making three contributions. First, through literature reviews, an overview of how pain is expressed in chronic pain and the motivation for detecting it in physical rehabilitation is provided. Second, a fully labelled multimodal dataset (named `EmoPain') containing high resolution multiple-view face videos, head mounted and room audio signals, full body 3D motion capture and electromyographic signals from back muscles is supplied. Natural unconstrained pain related facial expressions and body movement behaviours were elicited from people with chronic pain carrying out physical exercises. Both instructed and non-instructed exercises were considered to reflect traditional scenarios of physiotherapist directed therapy and home-based self-directed therapy. 
Two sets of labels were assigned: level of pain from facial expressions annotated by eight raters and the occurrence of six pain-related body behaviours segmented by four experts. Third, through exploratory experiments grounded in the data, the factors and challenges in the automated recognition of such expressions and behaviour are described, the paper concludes by discussing potential avenues in the context of these findings also highlighting differences for the two exercise scenarios addressed.", "title": "" }, { "docid": "1f1fd7217ed5bae04f9ac6f8ccc8c23f", "text": "Relating the brain's structural connectivity (SC) to its functional connectivity (FC) is a fundamental goal in neuroscience because it is capable of aiding our understanding of how the relatively fixed SC architecture underlies human cognition and diverse behaviors. With the aid of current noninvasive imaging technologies (e.g., structural MRI, diffusion MRI, and functional MRI) and graph theory methods, researchers have modeled the human brain as a complex network of interacting neuronal elements and characterized the underlying structural and functional connectivity patterns that support diverse cognitive functions. Specifically, research has demonstrated a tight SC-FC coupling, not only in interregional connectivity strength but also in network topologic organizations, such as community, rich-club, and motifs. Moreover, this SC-FC coupling exhibits significant changes in normal development and neuropsychiatric disorders, such as schizophrenia and epilepsy. This review summarizes recent progress regarding the SC-FC relationship of the human brain and emphasizes the important role of large-scale brain networks in the understanding of structural-functional associations. Future research directions related to this topic are also proposed.", "title": "" }, { "docid": "a8695230b065ae2e4c5308dfe4f8c10e", "text": "The paper describes a solution for the Yandex Personalized Web Search Challenge. The goal of the challenge is to rerank top ten web search query results to bring most personally relevant results on the top, thereby improving the search quality. The paper focuses on feature engineering for learning to rank in web search, including a novel pair-wise feature, shortand long-term personal navigation features. The paper demonstrates that point-wise logistic regression can achieve the stat-of-the-art performance in terms of normalized discounted cumulative gain with capability to scale up.", "title": "" }, { "docid": "59a7ed26693b41d6b07f843d0cf149cb", "text": "Now a day’s business is growing at a very rapid pace and a lot of information is generated. The more information we have, based on internal experiences or from external sources, the better our decisions would be. Business executives are faced with the same dilemmas when they make decisions. They need the best tools available to help them. Decision support system helps the managers to take better and quick decision by using historical and current data. By combining massive amounts of data with sophisticated analytical models and tools, and by making the system easy to use, they provide a much better source of information to use in the decision-making process. Health care is also one of the domains which get a lot of benefits and researches with the advent and progress in data mining. Data mining in medicine can resolve this problem and can provide promising results. 
It plays a vital role in extracting useful knowledge and making scientific decision for diagnosis and treatment of disease. Treatment records of millions of patients have been recorded and many tools and algorithms are applied to understand and analyze the data. Heart failure is a common disease which is difficult to diagnose. To aid physicians in diagnosing heart failure, a decision support system has been proposed. A classification based methods in health care is used to diagnose based on certain parameters to diagnosis if the patient have certain disease or not. The purpose is to explore the aspects of Clinical Decision Support Systems and to figure out the most optimal methodology that can be used in Clinical Decision Support Systems to provide the best solutions and diagnosis to medical problems.", "title": "" }, { "docid": "af3cc5fc9cf58048f9805923b45305d6", "text": "Spell checkers are one of the most widely recognized and heavily employed features of word processing applications in existence today. This remains true despite the many problems inherent in the spell checking methods employed by all modern spell checkers. In this paper we present a proof-ofconcept spell checking system that is able to intrinsically avoid many of these problems. In particular, it is the actual corrections performed by the typist that provides the basis for error detection. These corrections are used to train a feed-forward neural network so that if the same error is remade, the network can flag the offending word as a possible error. Since these corrections are the observations of a single typist’s behavior, a spell checker employing this system is essentially specific to the typist that made the corrections. A discussion of the benefits and deficits of the system is presented with the conclusion that the system is most effective as a supplement to current spell checking methods.", "title": "" }, { "docid": "8539b0107b37cb97b11804b0adafeae3", "text": "The exchange of independent information between two nodes in a wireless network can be viewed as two unicast sessions, corresponding to information transfer along one direction and the opposite direction. In this paper we show such information exchange can be efficiently performed by exploiting network coding and the physical-layer broadcast property offered by the wireless medium, which improves upon conventional solutions that separate the processing of the two unicast sessions. We propose a distributed scheme that obviates the need for synchronization and is robust to random packet loss and delay, and so on. The scheme is simple and incurs minor overhead.", "title": "" }, { "docid": "5caedb986844afcd40b5deb9ca8ba116", "text": "We present here because it will be so easy for you to access the internet service. As in this new era, much technology is sophistically offered by connecting to the internet. No any problems to face, just for this day, you can really keep in mind that the book is the best book for you. We offer the best here to read. After deciding how your feeling will be, you can enjoy to visit the link and get the book.", "title": "" }, { "docid": "4213993be9e2cf6d3470c59db20ea091", "text": "The virtual instrument is the main content of instrument technology nowadays. This article details the implementation process of the virtual oscilloscope. It is designed by LabVIEW graphical programming language. 
The virtual oscilloscope can achieve waveform display, channel selection, data collection, data reading, writing and storage, spectrum analysis, printing and waveform parameters measurement. It also has a friendly user interface and can be operated conveniently.", "title": "" }, { "docid": "3be5e04dab978b55064f0621839b4003", "text": "These lecture notes introduce some basic concepts from Shannon’s information theory, such as (conditional) Shannon entropy, mutual information, and Rényi entropy, as well as a number of basic results involving these notions. Subsequently, well-known bounds on perfectly secure encryption, source coding (i.e. data compression), and reliable communication over unreliable channels are discussed. We also cover and prove the elegant privacy amplification theorem. This provides a means to mod out the adversary’s partial information and to distill a highly secret key. It is a key result in theoretical cryptography, and a primary starting point for the very active subarea of unconditional security.", "title": "" }, { "docid": "70bed43cdfd50586e803bf1a9c8b3c0a", "text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.", "title": "" }, { "docid": "36a494bbe8d93d664fa2e30761ff79c4", "text": "Text clustering is used on a variety of applications such as content-based recommendation, categorization, summarization, information retrieval and automatic topic extraction. Since most pair of documents usually shares just a small percentage of words, the dataset representation tends to become very sparse, thus the need of using a similarity metric capable of a partial matching of a set of features. The technique known as Co-Clustering is capable of finding several clusters inside a dataset with each cluster composed of just a subset of the object and feature sets. In word-document data this can be useful to identify the clusters of documents pertaining to the same topic, even though they share just a small fraction of words. In this paper a scalable co-clustering algorithm is proposed using the Locality-sensitive hashing technique in order to find co-clusters of documents. The proposed algorithm will be tested against other co-clustering and traditional algorithms in well known datasets. The results show that this algorithm is capable of finding clusters more accurately than other approaches while maintaining a linear complexity.", "title": "" } ]
scidocsrr
9bb1fe552224c288f2c65f1fbb16ced5
Predicting user interests from contextual information
[ { "docid": "eaf30f31b332869bc45ff1288c41da71", "text": "Search Engines: Information Retrieval In Practice is writen by Bruce Croft in English language. Release on 2009-02-16, this book has 552 page count that consist of helpful information with easy reading experience. The book was publish by Addison-Wesley, it is one of best subjects book genre that gave you everything love about reading. You can find Search Engines: Information Retrieval In Practice book with ISBN 0136072240.", "title": "" }, { "docid": "3dab0441ca1e4fb39296be8006611690", "text": "A content-based personalized recommendation system learns user specific profiles from user feedback so that it can deliver information tailored to each individual user's interest. A system serving millions of users can learn a better user profile for a new user, or a user with little feedback, by borrowing information from other users through the use of a Bayesian hierarchical model. Learning the model parameters to optimize the joint data likelihood from millions of users is very computationally expensive. The commonly used EM algorithm converges very slowly due to the sparseness of the data in IR applications. This paper proposes a new fast learning technique to learn a large number of individual user profiles. The efficacy and efficiency of the proposed algorithm are justified by theory and demonstrated on actual user data from Netflix and MovieLens.", "title": "" } ]
[ { "docid": "0d18f41db76330c5d9cdceb268ca3434", "text": "A Low-power convolutional neural network (CNN)-based face recognition system is proposed for the user authentication in smart devices. The system consists of two chips: an always-on CMOS image sensor (CIS)-based face detector (FD) and a low-power CNN processor. For always-on FD, analog–digital Hybrid Haar-like FD is proposed to improve the energy efficiency of FD by 39%. For low-power CNN processing, the CNN processor with 1024 MAC units and 8192-bit-wide local distributed memory operates at near threshold voltage, 0.46 V with 5-MHz clock frequency. In addition, the separable filter approximation is adopted for the workload reduction of CNN, and transpose-read SRAM using 7T SRAM cell is proposed to reduce the activity factor of the data read operation. Implemented in 65-nm CMOS technology, the <inline-formula> <tex-math notation=\"LaTeX\">$3.30 \\times 3.36$ </tex-math></inline-formula> mm<sup>2</sup> CIS chip and the <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> mm<sup>2</sup> CNN processor consume 0.62 mW to evaluate one face at 1 fps and achieved 97% accuracy in LFW dataset.", "title": "" }, { "docid": "ca307225e8ab0e7876446cf17d659fc8", "text": "This paper presents a novel class of substrate integrated waveguide (SIW) filters, based on periodic perforations of the dielectric layer. The perforations allow to reduce the local effective dielectric permittivity, thus creating waveguide sections below cutoff. This effect is exploited to implement immittance inverters through analytical formulas, providing simple design rules for the direct synthesis of the filters. The proposed solution is demonstrated through the design and testing of several filters with different topologies (including half-mode SIW and folded structures). The comparison with classical iris-type SIW filters demonstrates that the proposed filters exhibit better performance in terms of sensitivity to fabrication inaccuracies and rejection bandwidth, at the cost of a slightly larger size.", "title": "" }, { "docid": "e4c879dffd5be1111573c7951ce9f8cd", "text": "Many algorithms have been implemented to the problem of Automatic Text Categorization (ATC). Most of the work in this area has been carried out on English texts, with only a few researchers addressing Arabic texts. We have investigated the use of the K-Nearest Neighbour (K-NN) classifier, with an Inew, cosine, jaccard and dice similarities, in order to enhance Arabic ATC. We represent the dataset as un-stemmed and stemmed data; with the use of TREC-2002, in order to remove prefixes and suffixes. However, for statistical text representation, Bag-Of-Words (BOW) and character-level 3 (3-Gram) were used. In order to, reduce the dimensionality of feature space; we used several feature selection methods. Experiments conducted with Arabic text showed that the K-NN classifier, with the new method similarity Inew 92.6% Macro-F1, had better performance than the K-NN classifier with cosine, jaccard and dice similarities. 
Chi-square feature selection, with representation by BOW, led to the best performance over other feature selection methods using BOW and 3-Gram.", "title": "" }, { "docid": "640832a57ea45ff0915e78370fe9353f", "text": "This paper proposes a neural sequence-to-sequence text-to-speech (TTS) model which can control latent attributes in the generated speech that are rarely annotated in the training data, such as speaking style, accent, background noise, and recording conditions. The model is formulated as a conditional generative model based on the variational autoencoder (VAE) framework, with two levels of hierarchical latent variables. The first level is a categorical variable, which represents attribute groups (e.g. clean/noisy) and provides interpretability. The second level, conditioned on the first, is a multivariate Gaussian variable, which characterizes specific attribute configurations (e.g. noise level, speaking rate) and enables disentangled fine-grained control over these attributes. This amounts to using a Gaussian mixture model (GMM) for the latent distribution. Extensive evaluation demonstrates its ability to control the aforementioned attributes. In particular, we train a high-quality controllable TTS model on real found data, which is capable of inferring speaker and style attributes from a noisy utterance and use it to synthesize clean speech with controllable speaking style.", "title": "" }, { "docid": "f5464935cd89c1042652c4f0994550e8", "text": "Convolutional neural networks (CNNs) have achieved great success on grid-like data such as images, but face tremendous challenges in learning from more generic data such as graphs. In CNNs, the trainable local filters enable the automatic extraction of high-level features. The computation with filters requires a fixed number of ordered units in the receptive fields. However, the number of neighboring units is neither fixed nor are they ordered in generic graphs, thereby hindering the applications of convolutional operations. Here, we address these challenges by proposing the learnable graph convolutional layer (LGCL). LGCL automatically selects a fixed number of neighboring nodes for each feature based on value ranking in order to transform graph data into grid-like structures in 1-D format, thereby enabling the use of regular convolutional operations on generic graphs. To enable model training on large-scale graphs, we propose a sub-graph training method to reduce the excessive memory and computational resource requirements suffered by prior methods on graph convolutions. Our experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that our methods can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network datasets. Our results also indicate that the proposed methods using sub-graph training strategy are more efficient as compared to prior approaches.", "title": "" }, { "docid": "9d3cb5ae51c25bb059a7503d1212e6a5", "text": "People are generally unaware of the operation of the system of cognitive mechanisms that ameliorate their experience of negative affect (the psychological immune system), and thus they tend to overestimate the duration of their affective reactions to negative events. 
This tendency was demonstrated in 6 studies in which participants overestimated the duration of their affective reactions to the dissolution of a romantic relationship, the failure to achieve tenure, an electoral defeat, negative personality feedback, an account of a child's death, and rejection by a prospective employer. Participants failed to distinguish between situations in which their psychological immune systems would and would not be likely to operate and mistakenly predicted overly and equally enduring affective reactions in both instances. The present experiments suggest that people neglect the psychological immune system when making affective forecasts.", "title": "" }, { "docid": "82ea33c6fa6b36d3f641a3949d4d8d72", "text": "We study detection of cyberbullying in photosharing networks, with an eye on developing earlywarning mechanisms for the prediction of posted images vulnerable to attacks. Given the overwhelming increase in media accompanying text in online social networks, we investigate use of posted images and captions for improved detection of bullying in response to shared content. We validate our approaches on a dataset of over 3000 images along with peer-generated comments posted on the Instagram photo-sharing network, running comprehensive experiments using a variety of classifiers and feature sets. In addition to standard image and text features, we leverage several novel features including topics determined from image captions and a pretrained convolutional neural network on image pixels. We identify the importance of these advanced features in assisting detection of cyberbullying in posted comments. We also provide results on classification of images and captions themselves as potential targets for cyberbullies.", "title": "" }, { "docid": "945ef1e5215666f6eb13eccaf24e8a56", "text": "DNA mismatch repair (MMR) proteins are ubiquitous players in a diverse array of important cellular functions. In its role in post-replication repair, MMR safeguards the genome correcting base mispairs arising as a result of replication errors. Loss of MMR results in greatly increased rates of spontaneous mutation in organisms ranging from bacteria to humans. Mutations in MMR genes cause hereditary nonpolyposis colorectal cancer, and loss of MMR is associated with a significant fraction of sporadic cancers. Given its prominence in mutation avoidance and its ability to target a range of DNA lesions, MMR has been under investigation in studies of ageing mechanisms. This review summarizes what is known about the molecular details of the MMR pathway and the role of MMR proteins in cancer susceptibility and ageing.", "title": "" }, { "docid": "cfd0cadbdf58ee01095aea668f0da4fe", "text": "A unique and miniaturized dual-band coplanar waveguide (CPW)-fed antenna is presented. The proposed antenna comprises a rectangular patch that is surrounded by upper and lower ground-plane sections that are interconnected by a high-impedance microstrip line. The proposed antenna structure generates two separate impedance bandwidths to cover frequency bands of GSM and Wi-Fi/WLAN. The antenna realized is relatively small in size $(17\\times 20\\ {\\hbox{mm}}^{2})$ and operates over frequency ranges 1.60–1.85 and 4.95–5.80 GHz, making it suitable for GSM and Wi-Fi/WLAN applications. In addition, the antenna is circularly polarized in the GSM band. Experimental results show the antenna exhibits monopole-like radiation characteristics and a good antenna gain over its operating bands. 
The measured and simulated results presented show good agreement.", "title": "" }, { "docid": "fc06673e86c237e06d9e927e2f6468a8", "text": "Locality sensitive hashing (LSH) is a computationally efficient alternative to the distance based anomaly detection. The main advantages of LSH lie in constant detection time, low memory requirement, and simple implementation. However, since the metric of distance in LSHs does not consider the property of normal training data, a naive use of existing LSHs would not perform well. In this paper, we propose a new hashing scheme so that hash functions are selected dependently on the properties of the normal training data for reliable anomaly detection. The distance metric of the proposed method, called NSH (Normality Sensitive Hashing) is theoretically interpreted in terms of the region of normal training data and its effectiveness is demonstrated through experiments on real-world data. Our results are favorably comparable to state-of-the arts with the low-level features.", "title": "" }, { "docid": "f6a08c6659fcb7e6e56c0d004295c809", "text": "Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes the representation of a node recursively from its neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop control variate based algorithms with new theoretical guarantee to converge to a local optimum of GCN regardless of the neighbor sampling size. Empirical results show that our algorithms enjoy similar convergence rate and model quality with the exact algorithm using only two neighbors per node. The running time of our algorithms on a large Reddit dataset is only one seventh of previous neighbor sampling algorithms.", "title": "" }, { "docid": "fa8d2547c3f2524596e97681b846b0e6", "text": "Native Language Identification (NLI) is a task aimed at determining the native language (L1) of learners of second language (L2) on the basis of their written texts. To date, research on NLI has focused on relatively small corpora. We apply NLI to the recently released EFCamDat corpus which is not only multiple times larger than previous L2 corpora but also provides longitudinal data at several proficiency levels. Our investigation using accurate machine learning with a wide range of linguistic features reveals interesting patterns in the longitudinal data which are useful for both further development of NLI and its application to research on L2 acquisition.", "title": "" }, { "docid": "f0f0a24e6206bf821669d8707d7ce8de", "text": "Spend your few moment to read a book even only few pages. Reading book is not obligation and force for everybody. When you don't want to read, you can get punishment from the publisher. Read a book becomes a choice of your different characteristics. Many people with reading habit will always be enjoyable to read, or on the contrary. For some reasons, this graph based knowledge representation computational foundations of conceptual graphs advanced information and knowledge processing tends to be the representative book in this website.", "title": "" }, { "docid": "6b0860d2547ab7c4498cfb94a0ca7df4", "text": "Photo sharing is one of the most popular Web services. 
Photo sharing sites provide functions to add tags and geo-tags to photos to make photo organization easy. Considering that people take photos to record something that attracts them, geo-tagged photos are a rich data source that reflects people's memorable events associated with locations. In this paper, we focus on geo-tagged photos and propose a method to detect people's frequent trip patterns, i.e., typical sequences of visited cities and durations of stay as well as descriptive tags that characterize the trip patterns. Our method first segments photo collections into trips and categorizes them based on their trip themes, such as visiting landmarks or communing with nature. Our method mines frequent trip patterns for each trip theme category. We crawled 5.7 million geo-tagged photos and performed photo trip pattern mining. The experimental result shows that our method outperforms other baseline methods and can correctly segment photo collections into photo trips with an accuracy of 78%. For trip categorization, our method can categorize about 80% of trips using tags and titles of photos and visited cities as features. Finally, we illustrate interesting examples of trip patterns detected from our dataset and show an application with which users can search frequent trip patterns by querying a destination, visit duration, and trip theme on the trip.", "title": "" }, { "docid": "5aedc933eaeef54893626359b89861fc", "text": "A photovoltaic cell converts the solar energy into the electrical energy by the photovoltaic effect. Solar cells are widely used in terrestrial and space applications. The photovoltaic cells must be operated at their maximum power point. The maximum power point varies with illumination, temperature, radiation dose and other ageing effects. In this paper, mathematical modeling, V-I and P-V characteristics of PV cells are studied. Different modeling techniques like empirical model and ANFIS model are proposed and developed. The result obtained by empirical models will be compared with ANFIS model and it will be proved that ANFIS gives better result. The simulated V-I and P-V characteristics of Photovoltaic cell for various temperature and irradiance are presented. Also ANFIS model outputs are presented. This can be used for sizing of PV system.", "title": "" }, { "docid": "6024570104e8791e7f916a6e0479819c", "text": "This paper presents a new database suitable for both 2-D and 3-D face recognition based on photometric stereo (PS): the Photoface database. The database was collected using a custom-made four-source PS device designed to enable data capture with minimal interaction necessary from the subjects. The device, which automatically detects the presence of a subject using ultrasound, was placed at the entrance to a busy workplace and captured 1839 sessions of face images with natural pose and expression. This meant that the acquired data is more realistic for everyday use than existing databases and is, therefore, an invaluable test bed for state-of-the-art recognition algorithms. The paper also presents experiments of various face recognition and verification algorithms using the albedo, surface normals, and recovered depth maps. Finally, we have conducted experiments in order to demonstrate how different methods in the pipeline of PS (i.e., normal field computation and depth map reconstruction) affect recognition and verification performance. 
These experiments help to 1) demonstrate the usefulness of PS, and our device in particular, for minimal-interaction face recognition, and 2) highlight the optimal reconstruction and recognition algorithms for use with natural-expression PS data. The database can be downloaded from http://www.uwe.ac.uk/research/Photoface.", "title": "" }, { "docid": "fcebb315a2ccf956141389f666612b28", "text": "The growing popularity of social media (e.g., Twitter) allows users to easily share information with each other and influence others by expressing their own sentiments on various subjects. In this work, we propose an unsupervised tri-clustering framework, which analyzes both user-level and tweet-level sentiments through co-clustering of a tripartite graph. A compelling feature of the proposed framework is that the quality of sentiment clustering of tweets, users, and features can be mutually improved by joint clustering. We further investigate the evolution of user-level sentiments and latent feature vectors in an online framework and devise an efficient online algorithm to sequentially update the clustering of tweets, users and features with newly arrived data. The online framework not only provides better quality of both dynamic user-level and tweet-level sentiment analysis, but also improves the computational and storage efficiency. We verified the effectiveness and efficiency of the proposed approaches on the November 2012 California ballot Twitter data.", "title": "" }, { "docid": "529d175e035ed9a2b19fae31ec7ca9e4", "text": "Currently, deep neural networks are deployed on low-power portable devices by first training a full-precision model using powerful hardware, and then deriving a corresponding lowprecision model for efficient inference on such systems. However, training models directly with coarsely quantized weights is a key step towards learning on embedded platforms that have limited computing resources, memory capacity, and power consumption. Numerous recent publications have studied methods for training quantized networks, but these studies have mostly been empirical. In this work, we investigate training methods for quantized neural networks from a theoretical viewpoint. We first explore accuracy guarantees for training methods under convexity assumptions. We then look at the behavior of these algorithms for non-convex problems, and show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.", "title": "" }, { "docid": "9ed378ac0420b3ec29cd830355e65ee7", "text": "Drawing on the Theory of Planned Behavior (TPB), this research investigates two factors that drive an employee to comply with requirements of the information security policy (ISP) of her organization with regards to protecting information and technology resources: an employee’s information security awareness (ISA) and her perceived fairness of the requirements of the ISP. Our results, which is based on the PLS analysis of data collected from 464 participants, show that ISA and perceived fairness positively affect attitude, and in turn attitude positively affects intention to comply. ISA also has an indirect impact on attitude since it positively influences perceived fairness. 
As organizations strive to get their employees to follow their information security rules and regulations, our study sheds light on the role of an employee’s ISA and procedural fairness with regards to security rules and regulations in the workplace.", "title": "" }, { "docid": "a3cd3ec70b5d794173db36cb9a219403", "text": "We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot’s fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher. Introduction We consider the problem of grasping novel objects, in the presence of significant amounts of clutter. A key challenge in this setting is that a full 3-d model of the scene is typically not available. Instead, a robot’s depth sensors can usually estimate only the shape of the visible portions of the scene. In this paper, we propose an algorithm that, given such partial models of the scene, selects a grasp—that is, a configuration of the robot’s arm and fingers—to try to pick up an object. If a full 3-d model (including the occluded portions of a scene) were available, then methods such as formand forceclosure (Mason and Salisbury 1985; Bicchi and Kumar 2000; Pollard 2004) and other grasp quality metrics (Pelossof et al. 2004; Hsiao, Kaelbling, and Lozano-Perez 2007; Ciocarlie, Goldfeder, and Allen 2007) can be used to try to find a good grasp. However, given only the point cloud returned by stereo vision or other depth sensors, a straightforward application of these ideas is impossible, since we do not have a model of the occluded portions of the scene. Copyright c © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Image of an environment (left) and the 3-d pointcloud (right) returned by the Swissranger depth sensor. In detail, we will consider a robot that uses a camera, together with a depth sensor, to perceive a scene. The depth sensor returns a “point cloud,” corresponding to 3-d locations that it has found on the front unoccluded surfaces of the objects. (See Fig. 1.) Such point clouds are typically noisy (because of small errors in the depth estimates); but more importantly, they are also incomplete. 1 This work builds on Saxena et al. (2006a; 2006b; 2007; 2008) which applied supervised learning to identify visual properties that indicate good grasps, given a 2-d image of the scene. However, their algorithm only chose a 3-d “grasp point”—that is, the 3-d position (and 3-d orientation; Saxena et al. 2007) of the center of the end-effector. 
Thus, it did not generalize well to more complex arms and hands, such as to multi-fingered hands where one has to not only choose the 3d position (and orientation) of the hand, but also address the high dof problem of choosing the positions of all the fingers. Our approach begins by computing a number of features of grasp quality, using both 2-d image and the 3-d point cloud features. For example, the 3-d data is used to compute a number of grasp quality metrics, such as the degree to which the fingers are exerting forces normal to the surfaces of the object, and the degree to which they enclose the object. Using such features, we then apply a supervised learning algorithm to estimate the degree to which different configurations of the full arm and fingers reflect good grasps. We test our algorithm on two robots, on a variety of objects of shapes very different from ones in the training set, including a ski boot, a coil of wire, a game controller, and Forexample, standard stereo vision fails to return depth values for textureless portions of the object, thus its point clouds are typically very sparse. Further, the Swissranger gives few points only because of its low spatial resolution of 144 × 176. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)", "title": "" } ]
scidocsrr
5a4ffc2c0ec230c9358642defad4f9f5
GGP with Advanced Reasoning and Board Knowledge Discovery
[ { "docid": "51eb012d58f9bc8d68128b0da1347eba", "text": "This dissertation explores the problem of constructing an effective general game-playing program, with an emphasis on techniques for automatically constructing effective heuristic evaluation functions from game descriptions. A technique based on abstract models of games is presented. The abstract model treats mobility, payoff and termination as the most salient elements of a game. Each of these aspects are quantified in terms of stable features. Evidence is presented that the technique produces heuristic evaluation functions that are both comprehensible and effective. Empirical work includes a series of general game-playing programs that placed first or second for the three consecutive years of the AAAI General Game-Playing Competition. The full dissertation can be downloaded at https://sites.google.com/site/jimcluneresearch/ .", "title": "" } ]
[ { "docid": "2827b6387ef2cc6e668b69f44364f00b", "text": "BITCOIN is a novel decentralized cryptocurrency system which has recently received a great attention from a wider audience. An interesting and unique feature of this system is that the complete list of all the transactions occurred from its inception is publicly available. This enables the investigation of funds movements to uncover interesting properties of the BITCOIN economy. In this paper we present a set of analyses of the user graph, i.e. the graph obtained by an heuristic clustering of the graph of BITCOIN transactions. Our analyses consider an up-to-date BITCOIN blockchain, as in December 2015, after the exponential explosion of the number of transactions occurred in the last two years. The set of analyses we defined includes, among others, the analysis of the time evolution of BITCOIN network, the verification of the \"rich get richer\" conjecture and the detection of the nodes which are critical for the network connectivity.", "title": "" }, { "docid": "cd0ad1783e0ef64300cd59bb2fab27d1", "text": "Game Theory (GT) has been used with excellent results to model and optimize the operation of a huge number of real-world systems, including in communications and networking. Using a tutorial style, this paper surveys and updates the literature contributions that have applied a diverse set of theoretical games to solve a variety of challenging problems, namely in wireless data communication networks. During our literature discussion, the games are initially divided into three groups: classical, evolutionary, and incomplete information. Then, the classical games are further divided into three subgroups: non-cooperative, repeated, and cooperative. This paper reviews applications of games to develop adaptive algorithms and protocols for the efficient operation of some standardized uses cases at the edge of emerging heterogeneous networks. Finally, we highlight the important challenges, open issues, and future research directions where GT can bring beneficial outcomes to emerging wireless data networking applications.", "title": "" }, { "docid": "bab606f99e64c7fd5ce3c04376fbd632", "text": "Diagnostic reasoning is a key component of many professions. To improve students’ diagnostic reasoning skills, educational psychologists analyse and give feedback on epistemic activities used by these students while diagnosing, in particular, hypothesis generation, evidence generation, evidence evaluation, and drawing conclusions. However, this manual analysis is highly time-consuming. We aim to enable the large-scale adoption of diagnostic reasoning analysis and feedback by automating the epistemic activity identification. We create the first corpus for this task, comprising diagnostic reasoning selfexplanations of students from two domains annotated with epistemic activities. Based on insights from the corpus creation and the task’s characteristics, we discuss three challenges for the automatic identification of epistemic activities using AI methods: the correct identification of epistemic activity spans, the reliable distinction of similar epistemic activities, and the detection of overlapping epistemic activities. We propose a separate performance metric for each challenge and thus provide an evaluation framework for future research. 
Indeed, our evaluation of various state-of-the-art recurrent neural network architectures reveals that current techniques fail to address some of these challenges.", "title": "" }, { "docid": "1a78e17056cca09250c7cc5f81fb271b", "text": "This paper presents a lightweight stereo vision-based driving lane detection and classification system to achieve the ego-car’s lateral positioning and forward collision warning to aid advanced driver assistance systems (ADAS). For lane detection, we design a self-adaptive traffic lanes model in Hough Space with a maximum likelihood angle and dynamic pole detection region of interests (ROIs), which is robust to road bumpiness, lane structure changing while the ego-car’s driving and interferential markings on the ground. What’s more, this model can be improved with geographic information system or electronic map to achieve more accurate results. Besides, the 3-D information acquired by stereo matching is used to generate an obstacle mask to reduce irrelevant objects’ interfere and detect forward collision distance. For lane classification, a convolutional neural network is trained by using manually labeled ROI from KITTI data set to classify the left/right-side line of host lane so that we can provide significant information for lane changing strategy making in ADAS. Quantitative experimental evaluation shows good true positive rate on lane detection and classification with a real-time (15Hz) working speed. Experimental results also demonstrate a certain level of system robustness on variation of the environment.", "title": "" }, { "docid": "64a98c3bc9aebfc470ad689b66b6d86b", "text": "In his famous thought experiments on synthetic vehicles, Valentino Braitenberg stipulated that simple stimulus-response reactions in an organism could evoke the appearance of complex behavior, which, to the unsuspecting human observer, may even appear to be driven by emotions such as fear, aggression, and even love (Braitenberg, Vehikel. Experimente mit künstlichen Wesen, Lit Verlag, 2004). In fact, humans appear to have a strong propensity to anthropomorphize, driven by our inherent desire for predictability that will quickly lead us to discern patterns, cause-and-effect relationships, and yes, emotions, in animated entities, be they natural or artificial. But might there be reasons, that we should intentionally “implement” emotions into artificial entities, such as robots? How would we proceed in creating robot emotions? And what, if any, are the ethical implications of creating “emotional” robots? The following article aims to shed some light on these questions with a multi-disciplinary review of recent empirical investigations into the various facets of emotions in robot psychology.", "title": "" }, { "docid": "fc036d58e966b72fc9f0c9a4c156b5a7", "text": "OBJECTIVE\nWe sought to estimate the prevalence of pelvic organ prolapse in older women using the Pelvic Organ Prolapse Quantification examination and to identify factors associated with prolapse.\n\n\nMETHODS\nWomen with a uterus enrolled at one site of the Women's Health Initiative Hormone Replacement Therapy randomized clinical trial were eligible for this ancillary cross-sectional study. Subjects underwent a Pelvic Organ Prolapse Quantification examination during a maximal Valsalva maneuver and in addition completed a questionnaire. 
Logistic regression was used to identify independent risk factors for each of 2 definitions of prolapse: 1) Pelvic Organ Prolapse Quantification stage II or greater and 2) the leading edge of prolapse measured at the hymen or below.\n\n\nRESULTS\nIn 270 participants, age (mean +/- SD) was 68.3 +/- 5.6 years, body mass index was 30.4 +/- 6.2 kg/m(2), and vaginal parity (median [range]) was 3 (0-12). The proportions of Pelvic Organ Prolapse Quantification stages (95% confidence intervals [CIs]) were stage 0, 2.3% (95% CI 0.8-4.8%); stage I, 33.0% (95% CI 27.4-39.0%); stage II, 62.9% (95% CI 56.8-68.7%); and stage III, 1.9% (95% CI 0.6-4.3%). In 25.2% (95% CI 20.1-30.8%), the leading edge of prolapse was at the hymen or below. Hormone therapy was not associated with prolapse (P =.9). On multivariable analysis, less education (odds ratio [OR] 2.16, 95% CI 1.10-4.24) and higher vaginal parity (OR 1.61, 95% CI 1.03-2.50) were associated with prolapse when defined as stage II or greater. For prolapse defined by the leading edge at or below the hymen, older age had a decreased risk (OR 0.50, 95% CI 0.27-0.92) and less education, and larger babies had an increased risk (OR 2.38, 95% CI 1.31-4.32 and OR 1.97, 95% CI 1.07-3.64, respectively).\n\n\nCONCLUSION\nSome degree of prolapse is nearly ubiquitous in older women, which should be considered in the development of clinically relevant definitions of prolapse. Risk factors for prolapse differed depending on the definition of prolapse used.", "title": "" }, { "docid": "936ec38ed893a84557d6f8ac5921ed17", "text": "Intrauterine growth restriction (IUGR) is the failure of the fetus to achieve his/her intrinsic growth potential. IUGR results in significant perinatal and long-term complications, including the development of insulin resistance/metabolic syndrome in adulthood [5]. Accurate and effective monitoring of fetal growth is one of the key component of prenatal care [3]. Ultrasound evaluation is considered the cornerstone of diagnosis and surveillance of the growth-restricted fetus [2]. Ultrasound measurements play a significant role in obstetrics as an accurate means for the estimation of the fetal age. Several parameters are used as aging parameters, the most important of which are the biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC) and femur length (FL). Serial measurement of these parameters over time is used to determine the fetal condition. Hence, consistency and reproducibility of measurements is an important issue. Consequently the automatic segmentation of anatomical structures in ultrasound imagery is a real challenge due to acoustic interferences (speckle noise) and artifacts which are inherent in these images. In this paper, a novel method is proposed for developing a Computer Aided Diagnosis (CAD) system for diagnosis and classification of IUGR foetuses. Diagnosis is performed by segmenting and extracting the required foetus features from an ultrasound image, using the Re-initialization free level set with Reaction Diffusion (RD) technique. An artificial neural network (ANN) classifier is developed, the features extracted are provided to the designed ANN model. 
The ANN then classifies normal and abnormal fetuses based on features provided.", "title": "" }, { "docid": "4ac804a7476560cc42982e0dfd5ff0b2", "text": "In order to foster innovation and improve the effectiveness of drug discovery, there is a considerable interest in exploring unknown 'chemical space' to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) was developed rapidly due to its advanced expansive search for 'chemical space', which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments to the junction of core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes the ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/.", "title": "" }, { "docid": "8f5028ec9b8e691a21449eef56dc267e", "text": "It can be shown that by replacing the sigmoid activation function often used in neural networks with an exponential function, a neural network can be formed which computes nonlinear decision boundaries. This technique yields decision surfaces which approach the Bayes optimal under certain conditions. There is a continuous control of the linearity of the decision boundaries, from linear for small training sets to any degree of nonlinearity justified by larger training sets. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The input variables can be either continuous or binary. Modification of the decision boundaries based on new data can be accomplished in real time simply by defining a set of weights equal to the new training vector. The decision boundaries can be implemented using analog 'neurons', which operate entirely in parallel. The organization proposed takes into account the projected pin limitations of neural-net chips of the near future. By a change in architecture, these same components could be used as associative memories, to compute nonlinear multivariate regression surfaces, or to compute a posteriori probabilities of an event.<<ETX>>", "title": "" }, { "docid": "97a411a4a69cd3c4a4fd63adc9438e31", "text": "In this paper, we propose a novel Global Norm-Aware Pooling (GNAP) block, which reweights local features in a convolutional neural network (CNN) adaptively according to their L2 norms and outputs a global feature vector with a global average pooling layer. Our GNAP block is designed to give dynamic weights to local features in different spatial positions without losing spatial symmetry. We use a GNAP block in a face feature embedding CNN to produce discriminative face feature vectors for poserobust face recognition. The GNAP block is of very cheap computational cost, but it is very powerful for frontal-profile face recognition. 
Under the CFP frontal-profile protocol, the GNAP block can not only reduce EER dramatically but also boost TPR@FPR=0.1% (TPR i.e. True Positive Rate, FPR i.e. False Positive Rate) substantially. Our experiments show that the GNAP block greatly promotes pose-robust face recognition over the base model especially at low false positive rate.", "title": "" }, { "docid": "de1fe89adbc6e4a8993eb90cae39d97e", "text": "Decision trees have proved to be valuable tools for the description, classification and generalization of data. Work on constructing decision trees from data exists in multiple disciplines such as statistics, pattern recognition, decision theory, signal processing, machine learning and artificial neural networks. Researchers in these disciplines, sometimes working on quite different problems, identified similar issues and heuristics for decision tree construction. This paper surveys existing work on decision tree construction, attempting to identify the important issues involved, directions the work has taken and the current state of the art.", "title": "" }, { "docid": "602077b20a691854102946757da4b287", "text": "For three-dimensional (3D) ultrasound imaging, connecting elements of a two-dimensional (2D) transducer array to the imaging system's front-end electronics is a challenge because of the large number of array elements and the small element size. To compactly connect the transducer array with electronics, we flip-chip bond a 2D 16 times 16-element capacitive micromachined ultrasonic transducer (CMUT) array to a custom-designed integrated circuit (IC). Through-wafer interconnects are used to connect the CMUT elements on the top side of the array with flip-chip bond pads on the back side. The IC provides a 25-V pulser and a transimpedance preamplifier to each element of the array. For each of three characterized devices, the element yield is excellent (99 to 100% of the elements are functional). Center frequencies range from 2.6 MHz to 5.1 MHz. For pulse-echo operation, the average -6-dB fractional bandwidth is as high as 125%. Transmit pressures normalized to the face of the transducer are as high as 339 kPa and input-referred receiver noise is typically 1.2 to 2.1 rnPa/ radicHz. The flip-chip bonded devices were used to acquire 3D synthetic aperture images of a wire-target phantom. Combining the transducer array and IC, as shown in this paper, allows for better utilization of large arrays, improves receive sensitivity, and may lead to new imaging techniques that depend on transducer arrays that are closely coupled to IC electronics.", "title": "" }, { "docid": "9a97ba6e4b4e80af129fdf48964017f2", "text": "Automatically categorizing documents into pre-defined topic hierarchies or taxonomies is a crucial step in knowledge and content management. Standard machine learning techniques like Support Vector Machines and related large margin methods have been successfully applied for this task, albeit the fact that they ignore the inter-class relationships. In this paper, we propose a novel hierarchical classification method that generalizes Support Vector Machine learning and that is based on discriminant functions that are structured in a way that mirrors the class hierarchy. Our method can work with arbitrary, not necessarily singly connected taxonomies and can deal with task-specific loss functions. All parameters are learned jointly by optimizing a common objective function corresponding to a regularized upper bound on the empirical loss. 
We present experimental results on the WIPO-alpha patent collection to show the competitiveness of our approach.", "title": "" }, { "docid": "75cea8f2afbcd65c2a8c024ed1a1efcd", "text": "Communications in datacenter jobs (such as the shuffle operations in MapReduce applications) often involve many parallel flows, which may be processed simultaneously. This highly parallel structure presents new scheduling challenges in optimizing job-level performance objectives in data centers. Chowdhury and Stoica introduced the coflow abstraction to capture these communication patterns, and recently Chowdhury et al. developed effective heuristics to schedule coflows. In this paper, we consider the problem of efficiently scheduling coflows with release dates so as to minimize the total weighted completion time, which has been shown to be strongly NP-hard. Our main result is the first polynomial-time deterministic approximation algorithm for this problem, with an approximation ratio of 67/3, and a randomized version of the algorithm, with a ratio of 9+16√2/3. Our results use techniques from both combinatorial scheduling and matching theory, and rely on a clever grouping of coflows. We also run experiments on a Facebook trace to test the practical performance of several algorithms, including our deterministic algorithm. Our experiments suggest that simple algorithms provide effective approximations of the optimal, and that our deterministic algorithm has near-optimal performance.", "title": "" }, { "docid": "a1774a08ffefd28785fbf3a8f4fc8830", "text": "Bounds are given for the empirical and expected Rademacher complexity of classes of linear transformations from a Hilbert space H to a finite dimensional space. The results imply generalization guarantees for graph regularization and multi-task subspace learning. 1 Introduction Rademacher averages have been introduced to learning theory as an efficient complexity measure for function classes, motivated by tight, sample or distribution dependent generalization bounds ([10], [2]). Both the definition of Rademacher complexity and the generalization bounds extend easily from real-valued function classes to function classes with values in R, as they are relevant to multi-task learning ([1], [12]). There has been an increasing interest in multi-task learning which has shown to be very effective in experiments ([7], [1]), and there have been some general studies of its generalisation performance ([4], [5]). For a large collection of tasks there are usually more data available than for a single task and these data may be put to a coherent use by some constraint of 'relatedness'. A practically interesting case is linear multi-task learning, extending linear large margin classifiers to vector valued large-margin classifiers. Different types of constraints have been proposed: Evgeniou et al ([8], [9]) propose graph regularization, where the vectors defining the classifiers of related tasks have to be near each other. They also show that their scheme can be implemented in the framework of kernel machines. Ando and Zhang [1] on the other hand require the classifiers to be members of a common low dimensional subspace. They also give generalization bounds using Rademacher complexity, but these bounds increase with the dimension of the input space. This paper gives dimension free bounds which apply to both approaches. 
1.1 Multi-task generalization and Rademacher complexity Suppose we have m classification tasks, represented by m independent random variables (X_l, Y_l) taking values in X × {−1, 1}, where X_l models the random", "title": "" }, { "docid": "49c7d088e4122831eddfe864a44b69ca", "text": "Common approaches to multi-label classification learn independent classifiers for each category, and employ ranking or thresholding schemes for classification. Because they do not exploit dependencies between labels, such techniques are only well-suited to problems in which categories are independent. However, in many domains labels are highly interdependent. This paper explores multi-label conditional random field (CRF) classification models that directly parameterize label co-occurrences in multi-label classification. Experiments show that the models outperform their single-label counterparts on standard text corpora. Even when multi-labels are sparse, the models improve subset classification error by as much as 40%.", "title": "" }, { "docid": "6046c04b170c68476affb306841c5043", "text": "Innovative ship design projects often require an extensive concept design phase to allow a wide range of potential solutions to be investigated, identifying which best suits the requirements. In these situations, the majority of ship design tools do not provide the best solution, limiting quick reconfiguration by focusing on detailed definition only. Parametric design, including generation of the hull surface, can model topology as well as geometry offering advantages often not exploited. Paramarine is an integrated ship design environment that is based on an object-orientated framework which allows the parametric connection of all aspects of both the product model and analysis together. Design configuration is managed to ensure that relationships within the model are topologically correct and kept up to date. While this offers great flexibility, concept investigation is streamlined by the Early Stage Design module, based on the (University College London) Functional Building Block methodology, collating design requirements, product model definition and analysis together to establish the form, function and layout of the design. By bringing this information together, the complete design requirements for the hull surface itself are established and provide the opportunity for parametric hull form generation techniques to have a fully integrated role in the concept design process. This paper explores several different hull form generation techniques which have been combined with the Early Stage Design module to demonstrate the capability of this design partnership.", "title": "" }, { "docid": "8f4a0c6252586fa01133f9f9f257ec87", "text": "The pls package implements principal component regression (PCR) and partial least squares regression (PLSR) in R (R Development Core Team 2006b), and is freely available from the Comprehensive R Archive Network (CRAN), licensed under the GNU General Public License (GPL). The user interface is modelled after the traditional formula interface, as exemplified by lm. This was done so that people used to R would not have to learn yet another interface, and also because we believe the formula interface is a good way of working interactively with models. It thus has methods for generic functions like predict, update and coef. It also has more specialised functions like scores, loadings and RMSEP, and a flexible cross-validation system. 
Visual inspection and assessment is important in chemometrics, and the pls package has a number of plot functions for plotting scores, loadings, predictions, coefficients and RMSEP estimates. The package implements PCR and several algorithms for PLSR. The design is modular, so that it should be easy to use the underlying algorithms in other functions. It is our hope that the package will serve well both for interactive data analysis and as a building block for other functions or packages using PLSR or PCR. We will here describe the package and how it is used for data analysis, as well as how it can be used as a part of other packages. Also included is a section about formulas and data frames, for people not used to the R modelling idioms.", "title": "" }, { "docid": "293e2cd2647740bb65849fed003eb4ac", "text": "In this paper we apply the Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) descriptor to the field of human action recognition. A video sequence is described as a collection of spatial-temporal words after the detection of space-time interest points and the description of the area around them. Our contribution has been in the description part, showing LBP-TOP to be a promising descriptor for human action classification purposes. We have also developed several extensions to the descriptor to enhance its performance in human action recognition, showing the method to be computationally efficient.", "title": "" }, { "docid": "2cc36985606c3d82b230165a8f025228", "text": "This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. In earlier work we had developed fluid-level control laws that achieve the first three objectives for arbitrary networks and delays, but were forced to constrain the resource allocation policy. In this paper we extend the theory to include dynamics at TCP sources, preserving the earlier features at fast time-scales, but permitting sources to match their steady-state preferences, provided a bound on round-trip-times is known. We develop two packet-level implementations of this protocol, using (i) ECN marking, and (ii) queueing delay, as means of communicating the congestion measure from links to sources. We discuss parameter choices and demonstrate using ns-2 simulations the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness. We also demonstrate the scalability of these features to increases in capacity, delay, and load, in comparison with other deployed and proposed protocols.", "title": "" } ]
scidocsrr
de92fa224418e7390b88371f269e50ab
Convolutional neural network based solar photovoltaic panel detection in satellite photos
[ { "docid": "be4defd26cf7c7a29a85da2e15132be9", "text": "The quantity of rooftop solar photovoltaic (PV) installations has grown rapidly in the US in recent years. There is a strong interest among decision makers in obtaining high quality information about rooftop PV, such as the locations, power capacity, and energy production of existing rooftop PV installations. Solar PV installations are typically connected directly to local power distribution grids, and therefore it is important for the reliable integration of solar energy to have information at high geospatial resolutions: by county, zip code, or even by neighborhood. Unfortunately, traditional means of obtaining this information, such as surveys and utility interconnection filings, are limited in availability and geospatial resolution. In this work a new approach is investigated where a computer vision algorithm is used to detect rooftop PV installations in high resolution color satellite imagery and aerial photography. It may then be possible to use the identified PV images to estimate power capacity and energy production for each array of panels, yielding a fast, scalable, and inexpensive method to obtain rooftop PV estimates for regions of any size. The aim of this work is to investigate the feasibility of the first step of the proposed approach: detecting rooftop PV in satellite imagery. Towards this goal, a collection of satellite rooftop images is used to develop and evaluate a detection algorithm. The results show excellent detection performance on the testing dataset and that, with further development, the proposed approach may be an effective solution for fast and scalable rooftop PV information collection.", "title": "" }, { "docid": "4e37fee25234a84a32b2ffc721ade2f8", "text": "Over the last decade, the deep neural networks are a hot topic in machine learning. It is breakthrough technology in processing images, video, speech, text and audio. Deep neural network permits us to overcome some limitations of a shallow neural network due to its deep architecture. In this paper we investigate the nature of unsupervised learning in restricted Boltzmann machine. We have proved that maximization of the log-likelihood input data distribution of restricted Boltzmann machine is equivalent to minimizing the cross-entropy and to special case of minimizing the mean squared error. Thus the nature of unsupervised learning is invariant to different training criteria. As a result we propose a new technique called “REBA” for the unsupervised training of deep neural networks. In contrast to Hinton’s conventional approach to the learning of restricted Boltzmann machine, which is based on linear nature of training rule, the proposed technique is founded on nonlinear training rule. We have shown that the classical equations for RBM learning are a special case of the proposed technique. As a result the proposed approach is more universal in contrast to the traditional energy-based model. We demonstrate the performance of the REBA technique using wellknown benchmark problem. The main contribution of this paper is a novel view and new understanding of an unsupervised learning in deep neural networks.", "title": "" } ]
[ { "docid": "fd2d04af3b259a433eb565a41b11ffbd", "text": "OVERVIEW • We develop novel orthogonality regularizations on training deep CNNs, by borrowing ideas and tools from sparse optimization. • These plug-and-play regularizations can be conveniently incorporated into training almost any CNN without extra hassle. • The proposed regularizations can consistently improve the performances of baseline deep networks on CIFAR-10/100, ImageNet and SVHN datasets, based on intensive empirical experiments, as well as accelerate/stabilize the training curves. • The proposed orthogonal regularizations outperform existing competitors.", "title": "" }, { "docid": "4b6405372df6c1167c4e4738b1dc0f3d", "text": "Most of the recent successful methods in accurate object detection and localization used some variants of R-CNN style two stage Convolutional Neural Networks (CNN) where plausible regions were proposed in the first stage then followed by a second stage for decision refinement. Despite the simplicity of training and the efficiency in deployment, the single stage detection methods have not been as competitive when evaluated in benchmarks consider mAP for high IoU thresholds. In this paper, we proposed a novel single stage end-to-end trainable object detection network to overcome this limitation. We achieved this by introducing Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are deep in context. We evaluated our method in the challenging KITTI dataset which measures methods under IoU threshold of 0.7. We showed that with RRC, a single reduced VGG-16 based model already significantly outperformed all the previously published results. At the time this paper was written our models ranked the first in KITTI car detection (the hard level), the first in cyclist detection and the second in pedestrian detection. These results were not reached by the previous single stage methods. The code is publicly available.", "title": "" }, { "docid": "a88ce42c9c974093a6f0226f6c8f6cf7", "text": "We introduce a new method DOLORES for learning knowledge graph embeddings that effectively captures contextual cues and dependencies among entities and relations. First, we note that short paths on knowledge graphs comprising of chains of entities and relations can encode valuable information regarding their contextual usage. We operationalize this notion by representing knowledge graphs not as a collection of triples but as a collection of entity-relation chains, and learn embeddings for entities and relations using deep neural models that capture such contextual usage. In particular, our model is based on Bi-Directional LSTMs and learn deep representations of entities and relations from constructed entity-relation chains. We show that these representations can very easily be incorporated into existing models to significantly advance the state of the art on several knowledge graph prediction tasks like link prediction, triple classification, and missing relation type prediction (in some cases by at least 9.5%).", "title": "" }, { "docid": "d7307a2d0c3d4a9622bd8e137e124562", "text": "BACKGROUND\nConsumers of research (researchers, administrators, educators and clinicians) frequently use standard critical appraisal tools to evaluate the quality of published research reports. However, there is no consensus regarding the most appropriate critical appraisal tool for allied health research. 
We summarized the content, intent, construction and psychometric properties of published, currently available critical appraisal tools to identify common elements and their relevance to allied health research.\n\n\nMETHODS\nA systematic review was undertaken of 121 published critical appraisal tools sourced from 108 papers located on electronic databases and the Internet. The tools were classified according to the study design for which they were intended. Their items were then classified into one of 12 criteria based on their intent. Commonly occurring items were identified. The empirical basis for construction of the tool, the method by which overall quality of the study was established, the psychometric properties of the critical appraisal tools and whether guidelines were provided for their use were also recorded.\n\n\nRESULTS\nEighty-seven percent of critical appraisal tools were specific to a research design, with most tools having been developed for experimental studies. There was considerable variability in items contained in the critical appraisal tools. Twelve percent of available tools were developed using specified empirical research. Forty-nine percent of the critical appraisal tools summarized the quality appraisal into a numeric summary score. Few critical appraisal tools had documented evidence of validity of their items, or reliability of use. Guidelines regarding administration of the tools were provided in 43% of cases.\n\n\nCONCLUSIONS\nThere was considerable variability in intent, components, construction and psychometric properties of published critical appraisal tools for research reports. There is no \"gold standard' critical appraisal tool for any study design, nor is there any widely accepted generic tool that can be applied equally well across study types. No tool was specific to allied health research requirements. Thus interpretation of critical appraisal of research reports currently needs to be considered in light of the properties and intent of the critical appraisal tool chosen for the task.", "title": "" }, { "docid": "617e0aece2a082947b7e7eadd12f280a", "text": "Segment routing is a new proposed routing mechanism for simplified and flexible path control in IP/MPLS networks. It builds on existing network routing and connection management protocols and one of its important features is the automatic rerouting of connections upon failure. Re-routing can be done with available restoration mechanisms including IGP-based rerouting and fast reroute with loop-free alternates. This is particularly attractive for use in Software Defined Networks (SDN) because the central controller need only be involved at connection set-up time and failures are handled automatically in a distributed manner. A significant challenge in restoration optimization in segment routed networks is the centralized determination of connections primary paths so as to enable the best sharing of restoration bandwidth over non-simultaneous network failures. We formulate this problem as a linear programming problem and develop an efficient primal-dual algorithm for the solution. We also develop a simple randomized rounding scheme for cases when there are additional constraints on segment routing. We demonstrate the significant capacity benefits achievable from this optimized restoration with segment routing.", "title": "" }, { "docid": "f53f739dd526e3f954aabded123f0710", "text": "Successful Free/Libre Open Source Software (FLOSS) projects must attract and retain high-quality talent. 
Researchers have invested considerable effort in the study of core and peripheral FLOSS developers. To this point, one critical subset of developers that have not been studied are One-Time code Contributors (OTC) – those that have had exactly one patch accepted. To understand why OTCs have not contributed another patch and provide guidance to FLOSS projects on retaining OTCs, this study seeks to understand the impressions, motivations, and barriers experienced by OTCs. We conducted an online survey of OTCs from 23 popular FLOSS projects. Based on the 184 responses received, we observed that OTCs generally have positive impressions of their FLOSS project and are driven by a variety of motivations. Most OTCs primarily made contributions to fix bugs that impeded their work and did not plan on becoming long term contributors. Furthermore, OTCs encounter a number of barriers that prevent them from continuing to contribute to the project. Based on our findings, there are some concrete actions FLOSS projects can take to increase the chances of converting OTCs into long-term contributors.", "title": "" }, { "docid": "bb2768b4df20d48da75d6ffc5e239603", "text": "A motor for in-wheel electric vehicle (EV) requires high efficiency and specific torque. In view of this, permanent-magnet brushless dc (PM BLDC) motor is most commonly employed for this application. However, due to the increasing cost of PMs, machines that do not use PMs are attracting interest. Switched reluctance motor (SRM), with its simple and robust construction, along with fault tolerant operation, is a viable option for in-wheel EV application. However, the SRM has low specific torque as compared with BLDC. Therefore, design improvements are required to make SRM a viable alternative to BLDC motor. In this paper, a new 12/26 pole SRM with high specific torque is proposed for in-wheel EV application. This machine has segmented-rotor-type construction. Also, concentrated-winding arrangement is used, ensuring low end-winding volume and copper loss. The developed machine also has high efficiency. In order to verify the design, the prototype of the machine is fabricated, and experimental results are presented.", "title": "" }, { "docid": "94898f9441ecd738e6b6200b7eec87d0", "text": "In the last decade, most countries in Latin America and the Caribbean have not spent enough on infrastructure. Total investment has fallen as a percentage of GDP, as public infrastructure expenditure has borne the brunt of fiscal adjustment, and private investment has failed to take up the slack. Most infrastructure services have therefore lagged behind East Asian comparators, middle income countries in general and China, in terms of both coverage and quality, despite the generally positive impacts of private sector involvement. This lackluster performance has slowed LAC’s economic growth and progress in poverty reduction. Countries of the region therefore need to focus on upgrading their infrastructure, as this can yield great dividends in terms of growth, competitiveness and poverty reduction, as well as improving the quality of life of their citizens. Catching up requires significant new investment. But first, measures need to be taken to ensure that infrastructure spending produces higher returns, both economic and social. Both these tasks involve multiple challenges. Public investment should be better allocated, with greater priority given to maintenance and rehabilitation against higher profile new projects. 
Small-scale local providers and cheaper technologies should be used for infrastructure work and services where appropriate. The considerable state resources already spent on subsidies, especially for water and electricity, need to be radically retargeted, to benefit fewer of the non-poor and more of those in need. More active policies are also needed to extend affordable coverage to rural areas and the urban poor, many of whom remain underserved. Considerable further financing will also be necessary. There is scope in many countries for user charges to generate more funding, particularly in the water and electricity sectors. Raising tariffs to cost recovery levels would be affordable to the great majority of the population in most countries, and a more effective application of the funds currently spent on subsidies would protect low-income groups. To reinvigorate private sector investment, governments need to find ways to make the risk-return ratio of projects more attractive. Improving contract design, making award processes more transparent and competitive and strengthening regulation will promote efficiency and better service, address investor concerns and reduce the cost of capital through lower regulatory risk. Such moves will also help reduce the renegotiation of concessions, which has been too frequent in Latin America and has damaged the credibility of contracting. However, greater care must go into risk management, and the correct identification and allocation of risks in concessions. State guarantees can be useful for attracting the private sector, but ill-considered commitments made in the past, sometimes using unrealistic demand projections or excessive compensation schedules have exposed governments to enormous contingent liabilities. Public opposition also represents a significant challenge to private sector involvement, politically and sometimes even operationally. Better subsidies, stronger and more transparent contract awards and regulation, as well as macroeconomic strengthening, are all likely to improve sentiment. Governments also need to improve the public perception of privatization, by making sure that the job losses and tariff hikes that reforms may involve do not coincide, and become associated, with private entry. But they also need to avoid creating unrealistic expectations of the benefits that private involvement may bring. Amid continuing caution from international investors toward emerging markets, governments will also need to tap other sources to meet their infrastructure funding needs. Greater use can be made of local capital markets, if instruments are designed creatively. And while few governments currently have much room in their budgets for additional infrastructure investment, the great potential returns of many infrastructure projects in the long-term may warrant increases in allocations in many countries.", "title": "" }, { "docid": "28e8bc5b0d1fa9fa46b19c8c821a625c", "text": "This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. 
The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is performed through a self-adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by command given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled and a self-learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.", "title": "" }, { "docid": "80a61f27dab6a8f71a5c27437254778b", "text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among the latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.", "title": "" }, { "docid": "df85e65b4647f355453cd660bb8a7ce3", "text": "In this paper we give a unified asymptotic formula for the partial gcd-sum function. We also study the mean-square of the error in the asymptotic formula.", "title": "" }, { "docid": "ad49595bd04c3285be2939e4ced77551", "text": "Embedded systems have found a very strong foothold in the global Information Technology (IT) market since they can provide very specialized and intricate functionality to a wide range of products. On the other hand, the migration of IT functionality to a plethora of new smart devices (like mobile phones, cars, aviation, game or household machines) has enabled the collection of a considerable amount of data that can be characterized as sensitive. Therefore, there is a need for protecting that data through IT security means. However, embedded systems are usually deployed in hostile environments where they can easily be subjected to physical attacks. In this paper, we provide an overview, from an embedded system (ES) hardware perspective, of methods and mechanisms for providing strong security and trust. The various categories of physical attacks on security related embedded systems are presented along with countermeasures to thwart them, and the importance of reconfigurable logic flexibility, adaptability and scalability, along with trust protection mechanisms, is highlighted. 
We adopt those mechanisms in order to propose an FPGA-based embedded system hardware architecture capable of providing security and trust along with physical attack protection using trust zone separation. The benefits of such an approach are discussed and a subsystem of the proposed architecture is implemented in FPGA technology as a proof-of-concept case study. From the performed analysis and implementation, it is concluded that flexibility, security and trust are fully realistic options for embedded system security enhancement. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "bbb1dc09e41e08e095a48e9e2a806356", "text": "The inexpensive Raspberry Pi is used to automate tasks at home, such as switching appliances on and off over Wi-Fi (Wireless Fidelity) or LAN (Local Area Network) using a personal computer, a mobile or a tablet through the browser. This can also be done by using the dedicated Android application. The conventional switch boards will be fitted with a touch screen or replaced with a touch screen to match the taste of the user's home decor. A PIR (Passive Infrared) sensor will be used to detect human presence and automate the on and off functionality.", "title": "" }, { "docid": "d7b77fae980b3bc26ffb4917d6d093c1", "text": "This work presents a combination of teach-and-replay visual navigation and Monte Carlo localization methods. It improves a reliable teach-and-replay navigation method by replacing its dependency on precise dead-reckoning with Monte Carlo localization, which determines the robot's position along the learned path. In consequence, the navigation method becomes robust to dead-reckoning errors, can be started from any point in the map and can deal with the 'kidnapped robot' problem. Furthermore, the robot is localized with MCL only along the taught path, i.e. in one dimension, which does not require a high number of particles and significantly reduces the computational cost. Thus, the combination of MCL and teach-and-replay navigation mitigates the disadvantages of both methods. The method was tested using a P3-AT ground robot and a Parrot AR.Drone aerial robot over a long indoor corridor. Experiments show the validity of the approach and establish a solid base for continuing this work.", "title": "" }, { "docid": "60de343325a305b08dfa46336f2617b5", "text": "On Friday, May 12, 2017, a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain's National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and, at the height of the attack, the NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A “critical” patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. 
This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.", "title": "" }, { "docid": "c676ccb53845c7108e07d9b08bccab46", "text": "This paper describes the recently introduced proportional-resonant (PR) controllers and their suitability for grid-connected converter current control. It is shown that the known shortcomings associated with PI controllers, like steady-state error for single-phase converters and the need for decoupling for three-phase converters, can be alleviated. Additionally, selective harmonic compensation is also possible with PR controllers. Suggested control-diagrams for three-phase grid converters and active filters are also presented. A practical application of PR current control for a photovoltaic (PV) inverter is also described. Index Terms: current controller, grid converters, photovoltaic inverter", "title": "" }, { "docid": "e81d3f48d7213720f489f52852cfbfa3", "text": "THE BRITISH ROCK GROUP Radiohead has carved out a unique place in the post-millennial rock milieu by tempering their highly experimental idiolect with structures more commonly heard in Top Forty rock styles. In what I describe as a Goldilocks principle, much of their music after OK Computer (1997) inhabits a space between banal convention and sheer experimentation—a dichotomy which I have elsewhere dubbed the 'Spears–Stockhausen Continuum.' In the timbral domain, the band often introduces sounds rather foreign to rock music such as the ondes Martenot and highly processed lead vocals within textures otherwise dominated by guitar, bass, and drums (e.g., 'The National Anthem,' 2000), and song forms that begin with paradigmatic verse–chorus structures often end with new material instead of a recapitulated chorus (e.g., 'All I Need,' 2007).", "title": "" }, { "docid": "0896b478eaa3a50ecea265d9e46496c3", "text": "Domestic revenue mobilisation is key to sustainable development finance – only self-sufficiency will allow the development of fully-functioning states with flourishing systems of political representation and economies reflecting societies' expressed preferences in regard to, for example, inequality. Tax evasion and tax avoidance are important insofar as they affect both the volume and nature of government finances. This paper estimates the total cost to developing countries of these leakages as US$385 billion annually, dwarfing any potential increase in aid. An additional result suggests that doubling aid to low income countries may have little positive revenue effect but damage the strength of political representation, if full trade liberalisation is simultaneously required.", "title": "" }, { "docid": "620574da26151188171a91eb64de344d", "text": "A major security issue for banking and financial institutions is phishing. Phishing is a webpage attack that imitates a customer web service, using tactics and mimicry from unauthorized persons or organizations. 
It is an illegitimate act to steal users' personal information, such as bank details, social security numbers and credit card details, by presenting itself as a trustworthy entity on the public network. When users provide confidential information, they are not aware of the fact that the websites they are using are phishing websites. This paper presents a technique for detecting phishing website attacks and also spotting phishing websites by combining the source code and URL of the webpage. Keywords—Phishing, Website attacks, Source Code, URL.", "title": "" } ]
scidocsrr
636395e5beeb9a5c851eb65d9630c1ae
Preventing Private Information Inference Attacks on Social Networks
[ { "docid": "1aa01ca2f1b7f5ea8ed783219fe83091", "text": "This paper presents NetKit, a modular toolkit for classifica tion in networked data, and a case-study of its application to a collection of networked data sets use d in prior machine learning research. Networked data are relational data where entities are inter connected, and this paper considers the common case where entities whose labels are to be estimated a re linked to entities for which the label is known. NetKit is based on a three-component framewo rk, comprising a local classifier, a relational classifier, and a collective inference procedur . Various existing relational learning algorithms can be instantiated with appropriate choices for the se three components and new relational learning algorithms can be composed by new combinations of c omponents. The case study demonstrates how the toolkit facilitates comparison of differen t learning methods (which so far has been lacking in machine learning research). It also shows how the modular framework allows analysis of subcomponents, to assess which, whether, and when partic ul components contribute to superior performance. The case study focuses on the simple but im portant special case of univariate network classification, for which the only information avai lable is the structure of class linkage in the network (i.e., only links and some class labels are avail ble). To our knowledge, no work previously has evaluated systematically the power of class-li nkage alone for classification in machine learning benchmark data sets. The results demonstrate clea rly th t simple network-classification models perform remarkably well—well enough that they shoul d be used regularly as baseline classifiers for studies of relational learning for networked dat a. The results also show that there are a small number of component combinations that excel, and that different components are preferable in different situations, for example when few versus many la be s are known.", "title": "" } ]
[ { "docid": "91c9dcfd3428fb79afd8d99722c95b69", "text": "In this article we describe results of our research on the disambiguation of user queries using ontologies for categorization. We present an approach to cluster search results by using classes or “Sense Folders” ~prototype categories! derived from the concepts of an assigned ontology, in our case WordNet. Using the semantic relations provided from such a resource, we can assign categories to prior, not annotated documents. The disambiguation of query terms in documents with respect to a user-specific ontology is an important issue in order to improve the retrieval performance for the user. Furthermore, we show that a clustering process can enhance the semantic classification of documents, and we discuss how this clustering process can be further enhanced using only the most descriptive classes of the ontology. © 2006 Wiley Periodicals, Inc.", "title": "" }, { "docid": "0084d9c69d79a971e7139ab9720dd846", "text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.", "title": "" }, { "docid": "88602ba9bcb297af04e58ed478664ee5", "text": "Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has attracted more and more attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, i.e., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. 
Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not converted to AD within 18 months), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs), automatically labeled by an atlas warping algorithm. For CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using a 10-fold cross-validation. As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality of biomarkers. Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% even using the best individual modality of biomarkers. Further analysis on MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to the case of using an individual modality of biomarkers.", "title": "" }, { "docid": "a01a1bb4c5f6fc027384aa40e495eced", "text": "Sentiment classification of grammatical constituents can be explained in a quasicompositional way. The classification of a complex constituent is derived via the classification of its component constituents and operations on these that resemble the usual methods of compositional semantic analysis. This claim is illustrated with a description of sentiment propagation, polarity reversal, and polarity conflict resolution within various linguistic constituent types at various grammatical levels. We propose a theoretical composition model, evaluate a lexical dependency parsing post-process implementation, and estimate its impact on general NLP pipelines.", "title": "" }, { "docid": "5cc4b9d01928678d9099548fc31abc94", "text": "Educational process mining (EPM) is an emerging field in educational data mining (EDM) aiming to make unexpressed knowledge explicit and to facilitate better understanding of the educational process. EPM uses log data gathered specifically from educational environments in order to discover, analyze, and provide a visual representation of the complete educational process. This paper introduces EPM and elaborates on some of the potential of this technology in the educational domain. It also describes some other relevant, related areas such as intentional mining, sequential pattern mining and graph mining. It highlights the components of an EPM framework and it describes the different challenges when handling event logs and other generic issues. It describes the data, tools, techniques and models used in EPM. In addition, the main work in this area is described and grouped by educational application domains. 
© 2017 Wiley Periodicals, Inc.", "title": "" }, { "docid": "e8792ced13f1be61d031e2b150cc5cf6", "text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.", "title": "" }, { "docid": "c5fbbdc6da326b08c734ac1f5daf76d1", "text": "Sentiment classification in Chinese microblogs is more challenging than that of Twitter for numerous reasons. In this paper, two kinds of approaches are proposed to classify opinionated Chinesemicroblog posts: 1) lexicon-based approaches combining Simple Sentiment Word-Count Method with 3 Chinese sentiment lexicons, 2) machine learning models with multiple features. According to our experiment, lexicon-based approaches can yield relatively fine results and machine learning classifiers outperform both the majority baseline and lexicon-based approaches. Among all the machine learning-based approaches, Random Forests works best and the results are satisfactory.", "title": "" }, { "docid": "6d2449941d27774451edde784d3521fe", "text": "Convolutional neural networks (CNNs) have recently been applied to the optical flow estimation problem. As training the CNNs requires sufficiently large amounts of labeled data, existing approaches resort to synthetic, unrealistic datasets. On the other hand, unsupervised methods are capable of leveraging real-world videos for training where the ground truth flow fields are not available. These methods, however, rely on the fundamental assumptions of brightness constancy and spatial smoothness priors that do not hold near motion boundaries. In this paper, we propose to exploit unlabeled videos for semi-supervised learning of optical flow with a Generative Adversarial Network. Our key insight is that the adversarial loss can capture the structural patterns of flow warp errors without making explicit assumptions. Extensive experiments on benchmark datasets demonstrate that the proposed semi-supervised algorithm performs favorably against purely supervised and baseline semi-supervised learning schemes.", "title": "" }, { "docid": "6c9f3107fbf14f5bef1b8edae1b9d059", "text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. 
We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.", "title": "" }, { "docid": "5295cd5811b6f86e3dbe6154d9ae5659", "text": "While swarm robotics systems are often claimed to be highly faulttolerant, so far research has limited its attention to safe laboratory settings and has virtually ignored security issues in the presence of Byzantine robotsÐi.e., robots with arbitrarily faulty or malicious behavior. However, in many applications one or more Byzantine robots may suffice to let current swarm coordination mechanisms fail with unpredictable or disastrous outcomes. In this paper, we provide a proof-of-concept for managing security issues in swarm robotics systems via blockchain technology. Our approach uses decentralized programs executed via blockchain technology (blockchain-based smart contracts) to establish secure swarm coordination mechanisms and to identify and exclude Byzantine swarm members. We studied the performance of our blockchain-based approach in a collective decision-making scenario both in the presence and absence of Byzantine robots and compared our results to those obtained with an existing collective decision approach. The results show a clear advantage of the blockchain approach when Byzantine robots are part of the swarm.", "title": "" }, { "docid": "610f1288ffa85573f0c161d65ca5f9d9", "text": "User authentication depends largely on the concept of passwords. However, users find it difficult to remember alphanumerical passwords over time. When user is required to choose a secure password, they tend to choose an easy, short and insecure password. Graphical password method is proposed as an alternative solution to text-based alphanumerical passwords. The reason of such proposal is that human brain is better in recognizing and memorizing pictures compared to traditional alphanumerical string. Therefore, in this paper, we propose a conceptual framework to better understand the user performance for new high-end graphical password method. Our proposed framework is based on hybrid approach combining different features into one. The user performance experimental analysis pointed out the effectiveness of the proposed framework.", "title": "" }, { "docid": "49af355cfc9e13234a2a3b115f225c1b", "text": "Tattoos play an important role in many religions. Tattoos have been used for thousands of years as important tools in ritual and tradition. Judaism, Christianity, and Islam have been hostile to the use of tattoos, but many religions, in particular Buddhism and Hinduism, make extensive use of them. This article examines their use as tools for protection and devotion.", "title": "" }, { "docid": "0ce556418f6557d86c59f178a206cd11", "text": "The efficiency of decision processes which can be divided into two stages has been measured for the whole process as well as for each stage independently by using the conventional data envelopment analysis (DEA) methodology in order to identify the causes of inefficiency. This paper modifies the conventional DEA model by taking into account the series relationship of the two sub-processes within the whole process. Under this framework, the efficiency of the whole process can be decomposed into the product of the efficiencies of the two sub-processes. 
In addition to this sound mathematical property, the case of Taiwanese non-life insurance companies shows that some unusual results which have appeared in the independent model do not exist in the relational model. In other words, the relational model developed in this paper is more reliable in measuring the efficiencies and consequently is capable of identifying the causes of inefficiency more accurately. Based on the structure of the model, the idea of efficiency decomposition can be extended to systems composed of multiple stages connected in series. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5d0c211333bd484e29c602b4996d1292", "text": "Humans tend to organize perceived information into hierarchies and structures, a principle that also applies to music. Even musically untrained listeners unconsciously analyze and segment music with regard to various musical aspects, for example, identifying recurrent themes or detecting temporal boundaries between contrasting musical parts. This paper gives an overview of state-of-theart methods for computational music structure analysis, where the general goal is to divide an audio recording into temporal segments corresponding to musical parts and to group these segments into musically meaningful categories. There are many different criteria for segmenting and structuring music audio. In particular, one can identify three conceptually different approaches, which we refer to as repetition-based, novelty-based, and homogeneitybased approaches. Furthermore, one has to account for different musical dimensions such as melody, harmony, rhythm, and timbre. In our state-of-the-art report, we address these different issues in the context of music structure analysis, while discussing and categorizing the most relevant and recent articles in this field.", "title": "" }, { "docid": "5bf761b94840bcab163ae3a321063b8b", "text": "The simulation method plays an important role in the investigation of the intrabody communication (IBC). Due to the problems of the transfer function and the corresponding parameters, only the simulation of the galvanic coupling IBC along the arm has been achieved at present. In this paper, a method for the mathematical simulation of the galvanic coupling IBC with different signal transmission paths has been introduced. First, a new transfer function of the galvanic coupling IBC was derived with the consideration of the internal resistances of the IBC devices. Second, the determination of the corresponding parameters used in the transfer function was discussed in detail. Finally, both the measurements and the simulations of the galvanic coupling IBC along the different signal transmission paths were carried out. Our investigation shows that the mathematical simulation results coincide with the measurement results over the frequency range from 100 kHz to 5 MHz, which indicates that the proposed method offers the significant advantages in the theoretical analysis and the application of the galvanic coupling IBC.", "title": "" }, { "docid": "e19e6ed491f5f95da5fd3950a5d36217", "text": "In the consumer credit industry, assessment of default risk is critically important for the financial health of both the lender and the borrower. Methods for predicting risk for an applicant using credit bureau and application data, typically based on logistic regression or survival analysis, are universally employed by credit card companies. 
Because of the manner in which the predictive models are fit using large historical sets of existing customer data that extend over many years, default trends, anomalies, and other temporal phenomena that result from dynamic economic conditions are not brought to light. We introduce a modification of the proportional hazards survival model that includes a time-dependency mechanism for capturing temporal phenomena, and we develop a maximum likelihood algorithm for fitting the model. Using a very large, real data set, we demonstrate that incorporating the time dependency can provide more accurate risk scoring, as well as important insight into dynamic market effects that can inform and enhance related decision making. Journal of the Operational Research Society (2012) 63, 306–321. doi:10.1057/jors.2011.34 Published online 11 May 2011", "title": "" }, { "docid": "eea5e2eddd2f1c19eed2e4bfd55cbb83", "text": "This paper presents a rule-based approach for finding out the stems from text in Bengali, a resource-poor language. It starts by introducing the concept of orthographic syllable, the basic orthographic unit of Bengali. Then it discusses the morphological structure of the tokens for different parts of speech, formalizes the inflection rule constructs and formulates a quantitative ranking measure for potential candidate stems of a token. These concepts are applied in the design and implementation of an extensible architecture of a stemmer system for Bengali text. The accuracy of the system is calculated to be ~89% and above.", "title": "" }, { "docid": "578130d8ef9d18041c84ed226af8c84a", "text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.", "title": "" }, { "docid": "4f4a3b9108786c77c1185c749cf3e010", "text": "Deep neural network (DNN) has emerged as a very important machine learning and pattern recognition technique in the big data era. Targeting to different types of training and inference tasks, the structure of DNN varies with flexible choices of different component layers, such as fully connection layer, convolutional layer, pooling layer and softmax layer. 
Deviated from other layers that only require simple operations like addition or multiplication, the softmax layer contains expensive exponentiation and division, thereby causing the hardware design of softmax layer suffering from high complexity, long critical path delay and overflow problems. This paper, for the first time, presents efficient hardware architecture of softmax layer in DNN. By utilizing the domain transformation technique and down-scaling approach, the proposed hardware architecture avoids the aforementioned problems. Analysis shows that the proposed hardware architecture achieves reduced hardware complexity and critical path delay.", "title": "" }, { "docid": "97065954a10665dee95977168b9e6c60", "text": "We describe the current status of Pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies.", "title": "" } ]
scidocsrr
a2e3f2cacf957a3c4c284d7e51d9ad1e
Reappraisal reconsidered: A closer look at the costs of an acclaimed emotion regulation strategy
[ { "docid": "75b4640071754d331783d26020f9ac7a", "text": "Traditionally, positive emotions and thoughts, strengths, and the satisfaction of basic psychological needs for belonging, competence, and autonomy have been seen as the cornerstones of psychological health. Without disputing their importance, these foci fail to capture many of the fluctuating, conflicting forces that are readily apparent when people navigate the environment and social world. In this paper, we review literature to offer evidence for the prominence of psychological flexibility in understanding psychological health. Thus far, the importance of psychological flexibility has been obscured by the isolation and disconnection of research conducted on this topic. Psychological flexibility spans a wide range of human abilities to: recognize and adapt to various situational demands; shift mindsets or behavioral repertoires when these strategies compromise personal or social functioning; maintain balance among important life domains; and be aware, open, and committed to behaviors that are congruent with deeply held values. In many forms of psychopathology, these flexibility processes are absent. In hopes of creating a more coherent understanding, we synthesize work in emotion regulation, mindfulness and acceptance, social and personality psychology, and neuropsychology. Basic research findings provide insight into the nature, correlates, and consequences of psychological flexibility and applied research provides details on promising interventions. Throughout, we emphasize dynamic approaches that might capture this fluid construct in the real-world.", "title": "" }, { "docid": "9f5fac3ed88722d5d1be43ff92ba9450", "text": "We examined the relationships between six emotion-regulation strategies (acceptance, avoidance, problem solving, reappraisal, rumination, and suppression) and symptoms of four psychopathologies (anxiety, depression, eating, and substance-related disorders). We combined 241 effect sizes from 114 studies that examined the relationships between dispositional emotion regulation and psychopathology. We focused on dispositional emotion regulation in order to assess patterns of responding to emotion over time. First, we examined the relationship between each regulatory strategy and psychopathology across the four disorders. We found a large effect size for rumination, medium to large for avoidance, problem solving, and suppression, and small to medium for reappraisal and acceptance. These results are surprising, given the prominence of reappraisal and acceptance in treatment models, such as cognitive-behavioral therapy and acceptance-based treatments, respectively. Second, we examined the relationship between each regulatory strategy and each of the four psychopathology groups. We found that internalizing disorders were more consistently associated with regulatory strategies than externalizing disorders. Lastly, many of our analyses showed that whether the sample came from a clinical or normative population significantly moderated the relationships. This finding underscores the importance of adopting a multi-sample approach to the study of psychopathology.", "title": "" }, { "docid": "93bca110f5551d8e62dc09328de83d4f", "text": "It is well established that emotion plays a key role in human social and economic decision making. The recent literature on emotion regulation (ER), however, highlights that humans typically make efforts to control emotion experiences. 
This leaves open the possibility that decision effects previously attributed to acute emotion may be a consequence of acute ER strategies such as cognitive reappraisal and expressive suppression. In Study 1, we manipulated ER of laboratory-induced fear and disgust, and found that the cognitive reappraisal of these negative emotions promotes risky decisions (reduces risk aversion) in the Balloon Analogue Risk Task and is associated with increased performance in the prehunch/hunch period of the Iowa Gambling Task. In Study 2, we found that naturally occurring negative emotions also increase risk aversion in Balloon Analogue Risk Task, but the incidental use of cognitive reappraisal of emotions impedes this effect. We offer evidence that the increased effectiveness of cognitive reappraisal in reducing the experience of emotions underlies its beneficial effects on decision making.", "title": "" } ]
[ { "docid": "b87cf41b31b8d163d6e44c9b1fa68cae", "text": "This paper gives a security analysis of Microsoft's ASP.NET technology. The main part of the paper is a list of threats which is structured according to an architecture of Web services and attack points. We also give a reverse table of threats against security requirements as well as a summary of security guidelines for IT developers. This paper has been worked out in collaboration with five University teams each of which is focussing on a different security problem area. We use the same architecture for Web services and attack points.", "title": "" }, { "docid": "df5cf5cd42e216ef723a6e2295a92f02", "text": "This integrative literature review assesses the relationship between hospital nurses' work environment characteristics and patient safety outcomes and recommends directions for future research based on examination of the literature. Using an electronic search of five databases, 18 studies published in English between 1999 and 2016 were identified for review. All but one study used a cross-sectional design, and only four used a conceptual/theoretical framework to guide the research. No definition of work environment was provided in most studies. Differing variables and instruments were used to measure patient outcomes, and findings regarding the effects of work environment on patient outcomes were inconsistent. To clarify the relationship between nurses' work environment characteristics and patient safety outcomes, researchers should consider using a longitudinal study design, using a theoretical foundation, and providing clear operational definitions of concepts. Moreover, given the inconsistent findings of previous studies, they should choose their measurement methodologies with care.", "title": "" }, { "docid": "d043a086f143c713e4c4e74c38e3040c", "text": "Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments. Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction. Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets. Results: Post our novel data cleansing process; each of the data sets had between 6 to 90 percent less of their original number of recorded values. Conclusions: One: Researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Two: Defect prediction data sets could benefit from lower level code metrics in addition to those more commonly used, as these will help to distinguish modules, reducing the likelihood of repeated data points. Three: The bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings. This is mainly due to repeated data points potentially causing substantial amounts of training and testing data to be identical.", "title": "" }, { "docid": "6f8cc4d648f223840ca67550f1a3b6dd", "text": "Information interaction system plays an important role in establishing a real-time and high-efficient traffic management platform in Intelligent Transportation System (ITS) applications. However, the present transmission technology still exists some defects in satisfying with the real-time performance of users data demand in Vehicle-to-Vehicle (V2V) communication. 
In order to solve this problem, this paper puts forward a novel Node Operating System (NDOS) scheme to realize the real-time data exchange between vehicles with wireless communication chips of mobile devices, and creates a distributed information interaction system for the interoperability between devices from various manufacturers. In addition, optimized data forwarding scheme is discussed for NDOS to achieve better transmission property and channel resource utilization. Experiments have been carried out in Network Simulator 2 (NS2) evaluation environment, and the results suggest that the scheme can receive higher transmission efficiency and validity than existing communication skills.", "title": "" }, { "docid": "6b70f1ab7f836d5a2681c3f998393ed3", "text": "FOREST FIRES CAUSE MANY ENVIronmental disasters, creating economical and ecological damage as well as endangering people’s lives. Heightened interest in automatic surveillance and early forest-fire detection has taken precedence over traditional human surveillance because the latter’s subjectivity affects detection reliability, which is the main issue for forest-fire detection systems. In current systems, the process is tedious, and human operators must manually validate many false alarms. Our approach—the False Alarm Reduction system—proposes an alternative realtime infrared–visual system that overcomes this problem. The FAR system consists of applying new infrared-image processing techniques and Artificial Neural Networks (ANNs), using additional information from meteorological sensors and from a geographical information database, taking advantage of the information redundancy from visual and infrared cameras through a matching process, and designing a fuzzy expert rule base to develop a decision function. Furthermore, the system provides the human operator with new software tools to verify alarms.", "title": "" }, { "docid": "98ed823294928f0f281c36d5ae0a6071", "text": "Entity matching is a crucial and difficult task for data integration. An effective solution strategy typically has to combine several techniques and to find suitable settings for critical configuration parameters such as similarity thresholds. Supervised (trainingbased) approaches promise to reduce the manual work for determining (learning) effective strategies for entity matching. However, they critically depend on training data selection which is a difficult problem that has so far mostly been addressed manually by human experts. In this paper we propose a trainingbased framework called STEM for entity matching and present different generic methods for automatically selecting training data to combine and configure several matching techniques. We evaluate the proposed methods for different match tasks and smalland medium-sized training sets.", "title": "" }, { "docid": "7a5e65dde7af8fe05654ea9d5c3b7861", "text": "The objective of this paper is to provide a comparison among permanent magnet (PM) wind generators of different topologies. Seven configurations are chosen for the comparison, consisting of both radial-flux and axial-flux machines. The comparison is done at seven power levels ranging from 1 to 200 kW. The basis for the comparison is discussed and implemented in detail in the design procedure. The criteria used for comparison are considered to be critical for the efficient deployment of PM wind generators. The design data are optimized and verified by finite-element analysis and commercial generator test results. 
For a given application, the results provide an indication of the best-suited machine.", "title": "" }, { "docid": "ebf7457391e8f1e728508f9b5af7a19f", "text": "Argument mining studies in natural language text often use lexical (e.g. n-grams) and syntactic (e.g. grammatical production rules) features with all possible values. In prior work on a corpus of academic essays, we demonstrated that such large and sparse feature spaces can cause difficulty for feature selection and proposed a method to design a more compact feature space. The proposed feature design is based on post-processing a topic model to extract argument and domain words. In this paper we investigate the generality of this approach, by applying our methodology to a new corpus of persuasive essays. Our experiments show that replacing n-grams and syntactic rules with features and constraints using extracted argument and domain words significantly improves argument mining performance for persuasive essays.", "title": "" }, { "docid": "5516a1459b44b340c930e8a2ed3ca152", "text": "Laboratory testing is important in the diagnosis and monitoring of liver injury and disease. Current liver tests include plasma markers of injury (e.g. aminotransferases, γ-glutamyl transferase, and alkaline phosphatase), markers of function (e.g. prothrombin time, bilirubin), viral hepatitis serologies, and markers of proliferation (e.g. α-fetoprotein). Among the injury markers, the alanine and aspartate aminotransferases (ALT and AST, respectively) are the most commonly used. However, interpretation of ALT and AST plasma levels can be complicated. Furthermore, both have poor prognostic utility in acute liver injury and liver failure. New biomarkers of liver injury are rapidly being developed, and the US Food and Drug Administration the European Medicines Agency have recently expressed support for use of some of these biomarkers in drug trials. The purpose of this paper is to review the history of liver biomarkers, to summarize mechanisms and interpretation of ALT and AST elevation in plasma in liver injury (particularly acute liver injury), and to discuss emerging liver injury biomarkers that may complement or even replace ALT and AST in the future.", "title": "" }, { "docid": "5d85e552841fe415daa72dff2a5f9706", "text": "M any security faculty members and practitioners bemoan the lack of good books in the field. Those of us who teach often find ourselves forced to rely on collections of papers to fortify our courses. In the last few years, however, we've started to see the appearance of some high-quality books to support our endeavors. Matt Bishop's book—Com-puter Security: Art and Science—is definitely hefty and packed with lots of information. It's a large book (with more than 1,000 pages), and it covers most any computer security topic that might be of interest. section discusses basic security issues at the definitional level. The Policy section addresses the relationship between policy and security, examining several types of policies in the process. Implementation I covers cryptography and its role in security. Implementation II describes how to apply policy requirements in systems. The Assurance section, which Elisabeth Sullivan wrote, introduces assurance basics and formal methods. The Special Topics section discusses malicious logic, vulnerability analysis , auditing, and intrusion detection. Finally, the Practicum ties all the previously discussed material to real-world examples. 
A ninth additional section, called End Matter, discusses miscellaneous supporting mathematical topics and concludes with an example. At a publisher's list price of US$74.99, you'll want to know why you should consider buying such an expensive book. Several things set it apart from other, similar, offerings. Most importantly , the book provides numerous examples and, refreshingly, definitions. A vertical bar alongside the examples distinguishes them from other text, so picking them out is easy. The book also includes a bibliography of over 1,000 references. Additionally, each chapter includes a summary, suggestions for further reading, research issues, and practice exercises. The format and layout are good, and the fonts are readable. The book is aimed at several audiences , and the preface describes many roadmaps, one of which discusses dependencies among the various chapters. Instructors can use it at the advanced undergraduate level or for introductory graduate-level computer-security courses. The preface also includes a mapping of suggested topics for undergraduate and graduate courses, presuming a certain amount of math and theoretical computer-science background as prerequisites. Practitioners can use the book as a resource for information on specific topics; the examples in the Practicum are ideally suited for them. So, what's the final verdict? Practitioners will want to consider this book as a reference to add to their bookshelves. Teachers of advanced undergraduate or introductory …", "title": "" }, { "docid": "0e1d93bb8b1b2d2e3453384092f39afc", "text": "Repetitive or prolonged head flexion posture while using a smartphone is known as one of risk factors for pain symptoms in the neck. To quantitatively assess the amount and range of head flexion of smartphone users, head forward flexion angle was measured from 18 participants when they were conducing three common smartphone tasks (text messaging, web browsing, video watching) while sitting and standing in a laboratory setting. It was found that participants maintained head flexion of 33-45° (50th percentile angle) from vertical when using the smartphone. The head flexion angle was significantly larger (p < 0.05) for text messaging than for the other tasks, and significantly larger while sitting than while standing. Study results suggest that text messaging, which is one of the most frequently used app categories of smartphone, could be a main contributing factor to the occurrence of neck pain of heavy smartphone users. Practitioner Summary: In this laboratory study, the severity of head flexion of smartphone users was quantitatively evaluated when conducting text messaging, web browsing and video watching while sitting and standing. Study results indicate that text messaging while sitting caused the largest head flexion than that of other task conditions.", "title": "" }, { "docid": "93810beca2ba988e29852cd1bc4b8ab6", "text": "Emotion dysregulation is thought to be critical to the development of negative psychological outcomes. Gross (1998b) conceptualized the timing of regulation strategies as key to this relationship, with response-focused strategies, such as expressive suppression, as less effective and more detrimental compared to antecedent-focused ones, such as cognitive reappraisal. In the current study, we examined the relationship between reappraisal and expressive suppression and measures of psychopathology, particularly for stress-related reactions, in both undergraduate and trauma-exposed community samples of women. 
Generally, expressive suppression was associated with higher, and reappraisal with lower, self-reported stress-related symptoms. In particular, expressive suppression was associated with PTSD, anxiety, and depression symptoms in the trauma-exposed community sample, with rumination partially mediating this association. Finally, based on factor analysis, expressive suppression and cognitive reappraisal appear to be independent constructs. Overall, expressive suppression, much more so than cognitive reappraisal, may play an important role in the experience of stress-related symptoms. Further, given their independence, there are potentially relevant clinical implications, as interventions that shift one of these emotion regulation strategies may not lead to changes in the other.", "title": "" }, { "docid": "b93455e6b023910bf7711d56d16f62a2", "text": "Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict what drugs are likely to target proteins involved with both diseases X and Y?—a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries—a flexible but tractable subset of first-order logic—on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.", "title": "" }, { "docid": "5b56288bb7b49f18148f28798cfd8129", "text": "According to World Health Organization (WHO) estimations, one out of five adults worldwide will be obese by 2025. Worldwide obesity has doubled since 1980. In fact, more than 1.9 billion adults (39%) of 18 years and older were overweight and over 600 million (13%) of these were obese in 2014. 42 million children under the age of five were overweight or obese in 2014. Obesity is a top public health problem due to its associated morbidity and mortality. This paper reviews the main techniques to measure the level of obesity and body fat percentage, and explains the complications that can carry to the individual's quality of life, longevity and the significant cost of healthcare systems. Researchers and developers are adapting the existing technology, as intelligent phones or some wearable gadgets to be used for controlling obesity. They include the promoting of healthy eating culture and adopting the physical activity lifestyle. 
The paper also shows a comprehensive study of the most used mobile applications and Wireless Body Area Networks focused on controlling the obesity and overweight. Finally, this paper proposes an intelligent architecture that takes into account both, physiological and cognitive aspects to reduce the degree of obesity and overweight.", "title": "" }, { "docid": "c71a5f23d9d8b9093ca1b2ccdb3d396a", "text": "1 M.Tech. Student 2 Assistant Professor 1,2 Department of Computer Science and Engineering 1,2 Don Bosco Institute of Technology, Affiliated by VTU Abstract— In the recent years Sentiment analysis (SA) has gained momentum by the increase of social networking sites. Sentiment analysis has been an important topic for data mining, social media for classifying reviews and thereby rating the entities such as products, movies etc. This paper represents a comparative study of sentiment classification of lexicon based approach and naive bayes classifier of machine learning in sentiment analysis.", "title": "" }, { "docid": "6fb72f68aa41a71ea51b81806d325561", "text": "An important aspect related to the development of face-aging algorithms is the evaluation of the ability of such algorithms to produce accurate age-progressed faces. In most studies reported in the literature, the performance of face-aging systems is established based either on the judgment of human observers or by using machine-based evaluation methods. In this paper we perform an experimental evaluation that aims to assess the applicability of human-based against typical machine based performance evaluation methods. The results of our experiments indicate that machines can be more accurate in determining the performance of face-aging algorithms. Our work aims towards the development of a complete evaluation framework for age progression methodologies.", "title": "" }, { "docid": "dad1c5e4aa43b9fc2b3592799f9a3a69", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.068 ⇑ Tel.: +886 7 3814526. E-mail address: leechung@mail.ee.kuas.edu.tw Due to the explosive growth of social-media applications, enhancing event-awareness by social mining has become extremely important. The contents of microblogs preserve valuable information associated with past disastrous events and stories. To learn the experiences from past events for tackling emerging real-world events, in this work we utilize the social-media messages to characterize real-world events through mining their contents and extracting essential features for relatedness analysis. On one hand, we established an online clustering approach on Twitter microblogs for detecting emerging events, and meanwhile we performed event relatedness evaluation using an unsupervised clustering approach. On the other hand, we developed a supervised learning model to create extensible measure metrics for offline evaluation of event relatedness. By means of supervised learning, our developed measure metrics are able to compute relatedness of various historical events, allowing the event impacts on specified domains to be quantitatively measured for event comparison. By combining the strengths of both methods, the experimental results showed that the combined framework in our system is sensible for discovering more unknown knowledge about event impacts and enhancing event awareness. 2012 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "77f8f90edd85f1af6de8089808153dd7", "text": "Distributed coding is a new paradigm for video compression, based on Slepian and Wolf's and Wyner and Ziv's information-theoretic results from the 1970s. This paper reviews the recent development of practical distributed video coding schemes. Wyner-Ziv coding, i.e., lossy compression with receiver side information, enables low-complexity video encoding where the bulk of the computation is shifted to the decoder. Since the interframe dependence of the video sequence is exploited only at the decoder, an intraframe encoder can be combined with an interframe decoder. The rate-distortion performance is superior to conventional intraframe coding, but there is still a gap relative to conventional motion-compensated interframe coding. Wyner-Ziv coding is naturally robust against transmission errors and can be used for joint source-channel coding. A Wyner-Ziv MPEG encoder that protects the video waveform rather than the compressed bit stream achieves graceful degradation under deteriorating channel conditions without a layered signal representation.", "title": "" }, { "docid": "91ef2853e45d9b82f92689e0b01e6d63", "text": "BACKGROUND\nThis study sought to evaluate the efficacy of nonoperative compression in correcting pectus carinatum in children.\n\n\nMATERIALS AND METHODS\nChildren presenting with pectus carinatum between August 1999 and January 2004 were prospectively enrolled in this study. The management protocol included custom compressive bracing, strengthening exercises, and frequent clinical follow-up.\n\n\nRESULTS\nThere were 30 children seen for evaluation. Their mean age was 13 years (range, 3-16 years) and there were 26 boys and 4 girls. Of the 30 original patients, 6 never returned to obtain the brace, leaving 24 patients in the study. Another 4 subjects were lost to follow-up. For the remaining 20 patients who have either completed treatment or continue in the study, the mean duration of bracing was 16 months, involving an average of 3 follow-up visits and 2 brace adjustments. Five of these patients had little or no improvement due to either too short a follow-up or noncompliance with the bracing. The other 15 patients (75%) had a significant to complete correction. There were no complications encountered during the study period.\n\n\nCONCLUSION\nCompressive orthotic bracing is a safe and effective alternative to both invasive surgical correction and no treatment for pectus carinatum in children. Compliance is critical to the success of this management strategy.", "title": "" }, { "docid": "b093976428f2125a7186d5f4b641292c", "text": "CONTEXT\nDehydroepiandrosterone (DHEA) and DHEA sulfate (DHEAS) are the major circulating adrenal steroids and substrates for peripheral sex hormone biosynthesis. In Addison's disease, glucocorticoid and mineralocorticoid deficiencies require lifelong replacement, but the associated near-total failure of DHEA synthesis is not typically corrected.\n\n\nOBJECTIVE AND DESIGN\nIn a double-blind trial, we randomized 106 subjects (44 males, 62 females) with Addison's disease to receive either 50 mg daily of micronized DHEA or placebo orally for 12 months to evaluate its longer-term effects on bone mineral density, body composition, and cognitive function together with well-being and fatigue.\n\n\nRESULTS\nCirculating DHEAS and androstenedione rose significantly in both sexes, with testosterone increasing to low normal levels only in females. 
DHEA reversed ongoing loss of bone mineral density at the femoral neck (P < 0.05) but not at other sites; DHEA enhanced total body (P = 0.02) and truncal (P = 0.017) lean mass significantly with no change in fat mass. At baseline, subscales of psychological well-being in questionnaires (Short Form-36, General Health Questionnaire-30), were significantly worse in Addison's patients vs. control populations (P < 0.001), and one subscale of SF-36 improved significantly (P = 0.004) after DHEA treatment. There was no significant benefit of DHEA treatment on fatigue or cognitive or sexual function. Supraphysiological DHEAS levels were achieved in some older females who experienced mild androgenic side effects.\n\n\nCONCLUSION\nAlthough further long-term studies of DHEA therapy, with dosage adjustment, are desirable, our results support some beneficial effects of prolonged DHEA treatment in Addison's disease.", "title": "" } ]
scidocsrr
02a2a886f18adcaab576588332df1755
Multi-resolution on compressed sets of clauses
[ { "docid": "6d2adebf7fbdf67b778b60ac69ea5cd3", "text": "In this paper, we propose Zero-Suppressed BDDs (0-Sup-BDDs), which are BDDs based on a new reduction rule. This data structure brings unique and compact representation of sets which appear in many combinatorial problems. Using 0-Sup-BDDs, we can manipulate such sets more simply and efficiently than using original BDDs. We show the properties of 0-Sup-BDDs, their manipulation algorithms, and good applications for LSI CAD systems.", "title": "" }, { "docid": "fcaab6be1862a55036fa360d01b7952d", "text": "The paper presents algorithm directional resolution, a variation on the original DavisPutnam algorithm, and analyzes its worstcase behavior as a function of the topological structure of the theories. The notions of induced width and diversity are shown to play a key role in bounding the complexity of the procedure. The importance of our analysis lies in highlighting structure-based tractable classes of satis ability and in providing theoretical guarantees on the time and space complexity of the algorithm. Contrary to previous assessments, we show that for many theories directional resolution could be an e ective procedure. Our empirical tests con rm theoretical prediction, showing that on problems with special structures, like chains, directional resolution greatly outperforms one of the most e ective satis ability algorithm known to date, namely the popular DavisPutnam procedure.", "title": "" } ]
[ { "docid": "d1114f1ced731a700d40dd97fe62b82b", "text": "Agricultural sector is playing vital role in Indian economy, in which irrigation mechanism is of key concern. Our paper aims to find the exact field condition and to control the wastage of water in the field and to provide exact controlling of field by using the drip irrigation, atomizing the agricultural environment by using the components and building the necessary hardware. For the precisely monitoring and controlling of the agriculture filed, different types of sensors were used. To implement the proposed system ARM LPC2148 Microcontroller is used. The irrigation mechanism is monitored and controlled more efficiently by the proposed system, which is a real time feedback control system. GSM technology is used to inform the end user about the exact field condition. Actually this method of irrigation system has been proposed primarily to save resources, yield of crops and farm profitability.", "title": "" }, { "docid": "f271596a45a3104554bfe975ac8b4d6c", "text": "In many regions of the visual system, the activity of a neuron is normalized by the activity of other neurons in the same region. Here we show that a similar normalization occurs during olfactory processing in the Drosophila antennal lobe. We exploit the orderly anatomy of this circuit to independently manipulate feedforward and lateral input to second-order projection neurons (PNs). Lateral inhibition increases the level of feedforward input needed to drive PNs to saturation, and this normalization scales with the total activity of the olfactory receptor neuron (ORN) population. Increasing total ORN activity also makes PN responses more transient. Strikingly, a model with just two variables (feedforward and total ORN activity) accurately predicts PN odor responses. Finally, we show that discrimination by a linear decoder is facilitated by two complementary transformations: the saturating transformation intrinsic to each processing channel boosts weak signals, while normalization helps equalize responses to different stimuli.", "title": "" }, { "docid": "5fbdeba4f91d31a9a3555109872ff250", "text": "Wepresent new results for the Frank–Wolfemethod (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (and subsequent) iterates. Our results include computational guarantees for both duality/bound gaps and the so-calledFWgaps. Lastly,wepresent complexity bounds in the presence of approximate computation of gradients and/or linear optimization subproblem solutions. Mathematics Subject Classification 90C06 · 90C25 · 65K05", "title": "" }, { "docid": "8c4e7e441a45ec0cccf2e1ce12adfc73", "text": "Purpose – The purpose of this paper is to present a study of knowledge management understanding and usage in small and medium knowledge-intensive enterprises. Design/methodology/approach – The study has taken an interpretitivist approach, using two knowledge-intensive South Yorkshire (England) companies as case studies, both of which are characterised by the need to process and use knowledge on a daily basis in order to remain competitive. 
The case studies were analysed using qualitative research methodology, composed of interviews and concept mapping, thus deriving a characterisation of understandings, perceptions and requirements of SMEs in relation to knowledge management. Findings – The study provides evidence that, while SMEs, including knowledge intensive ones, acknowledge that adequately capturing, storing, sharing and disseminating knowledge can lead to greater innovation and productivity, their managers are not prepared to invest the relatively high effort on long term knowledge management goals for which they have difficulty in establishing the added value. Thus, knowledge management activities within SMEs tend to happen in an informal way, rarely supported by purposely designed ICT systems. Research limitations/implications – This paper proposes that further studies in this field are required that focus on organisational and practical issues in order to close the gap between theoretical propositions and the reality of practice. Practical implications – The study suggests that in order to implement an appropriate knowledge management strategy in SMEs cultural, behavioural, and organisational issues need to be tackled before even considering technical issues. Originality/value – KM seems to have been successfully applied in large companies, but it is largely disregarded by small and medium sized enterprises (SMEs). This has been attributed primarily to a lack of a formal approach to the sharing, recording, transferring, auditing and exploiting of organisational knowledge, together with a lack of utilisation of available information technologies. This paper debates these concepts from a research findings point of view.", "title": "" }, { "docid": "e0d3a7e7e000c6704518763bf8dff8c8", "text": "Integration of optical communication circuits directly into high-performance microprocessor chips can enable extremely powerful computer systems. A germanium photodetector that can be monolithically integrated with silicon transistor technology is viewed as a key element in connecting chip components with infrared optical signals. Such a device should have the capability to detect very-low-power optical signals at very high speed. Although germanium avalanche photodetectors (APD) using charge amplification close to avalanche breakdown can achieve high gain and thus detect low-power optical signals, they are universally considered to suffer from an intolerably high amplification noise characteristic of germanium. High gain with low excess noise has been demonstrated using a germanium layer only for detection of light signals, with amplification taking place in a separate silicon layer. However, the relatively thick semiconductor layers that are required in such structures limit APD speeds to about 10 GHz, and require excessively high bias voltages of around 25 V (ref. 12). Here we show how nanophotonic and nanoelectronic engineering aimed at shaping optical and electrical fields on the nanometre scale within a germanium amplification layer can overcome the otherwise intrinsically poor noise characteristics, achieving a dramatic reduction of amplification noise by over 70 per cent. By generating strongly non-uniform electric fields, the region of impact ionization in germanium is reduced to just 30 nm, allowing the device to benefit from the noise reduction effects that arise at these small distances. 
Furthermore, the smallness of the APDs means that a bias voltage of only 1.5 V is required to achieve an avalanche gain of over 10 dB with operational speeds exceeding 30 GHz. Monolithic integration of such a device into computer chips might enable applications beyond computer optical interconnects—in telecommunications, secure quantum key distribution, and subthreshold ultralow-power transistors.", "title": "" }, { "docid": "9c98e4d100c6bc77d18f26234a5a4d59", "text": "The analysis of human motion as a clinical tool can bring many benefits such as the early detection of disease and the monitoring of recovery, so in turn helping people to lead independent lives. However, it is currently under used. Developments in depth cameras, such as Kinect, have opened up the use of motion analysis in settings such as GP surgeries, care homes and private homes. To provide an insight into the use of Kinect in the healthcare domain, we present a review of the current state of the art. We then propose a method that can represent human motions from time-series data of arbitrary length, as a single vector. Finally, we demonstrate the utility of this method by extracting a set of clinically significant features and using them to detect the age related changes in the motions of a set of 54 individuals, with a high degree of certainty (F1-score between 0.9–1.0). Indicating its potential application in the detection of a range of age-related motion impairments.", "title": "" }, { "docid": "3257ce1a07975c21e012e900dc95e746", "text": "In this work, a deep learning approach has been developed to carry out road detection using only LIDAR data. Starting from an unstructured point cloud, top-view images encoding several basic statistics such as mean elevation and density are generated. By considering a top-view representation, road detection is reduced to a single-scale problem that can be addressed with a simple and fast fully convolutional neural network (FCN). The FCN is specifically designed for the task of pixel-wise semantic segmentation by combining a large receptive field with high-resolution feature maps. The proposed system achieved excellent performance and it is among the top-performing algorithms on the KITTI road benchmark. Its fast inference makes it particularly suitable for real-time applications.", "title": "" }, { "docid": "159297c7f6e174923fc169bfb3bc5fe6", "text": "A bewildering variety of devices for communication from humans to computers now exists on the market. In order to make sense of this variety, and to aid in the design of new input devices, we propose a framework for describing and analyzing input devices. Following Mackinlay's semantic analysis of the design space for graphical presentations, our goal is to provide tools for the generation and test of input device designs. The descriptive tools we have created allow us to describe the semantics of a device and measure its expressiveness. Using these tools, we have built a taxonomy of input devices that goes beyond earlier taxonomies of Buxton & Baecker and Foley, Wallace, & Chan. In this paper, we build on these descriptive tools, and proceed to the use of human performance theories and data for evaluation of the effectiveness of points in this design space. We focus on two figures of merit, footprint and bandwidth, to illustrate this evaluation. 
The result is the systematic integration of methods for both generating and testing the design space of input devices.", "title": "" }, { "docid": "2d78a4c914c844a3f28e8f3b9f65339f", "text": "The availability of abundant data posts a challenge to integrate static customer data and longitudinal behavioral data to improve performance in customer churn prediction. Usually, longitudinal behavioral data are transformed into static data before being included in a prediction model. In this study, a framework with ensemble techniques is presented for customer churn prediction directly using longitudinal behavioral data. A novel approach called the hierarchical multiple kernel support vector machine (H-MK-SVM) is formulated. A three phase training algorithm for the H-MK-SVM is developed, implemented and tested. The H-MK-SVM constructs a classification function by estimating the coefficients of both static and longitudinal behavioral variables in the training process without transformation of the longitudinal behavioral data. The training process of the H-MK-SVM is also a feature selection and time subsequence selection process because the sparse non-zero coefficients correspond to the variables selected. Computational experiments using three real-world databases were conducted. Computational results using multiple criteria measuring performance show that the H-MK-SVM directly using longitudinal behavioral data performs better than currently available classifiers.", "title": "" }, { "docid": "e062d88651a8bdc637ecf57b4cbb1b2b", "text": "Wireless Underground Sensor Networks (WUSNs) consist of wirelessly connected underground sensor nodes that communicate untethered through soil. WUSNs have the potential to impact a wide variety of novel applications including intelligent irrigation, environment monitoring, border patrol, and assisted navigation. Although its deployment is mainly based on underground sensor nodes, a WUSN still requires aboveground devices for data retrieval, management, and relay functionalities. Therefore, the characterization of the bi-directional communication between a buried node and an aboveground device is essential for the realization of WUSNs. In this work, empirical evaluations of underground-to- aboveground (UG2AG) and aboveground-to-underground (AG2UG) communication are presented. More specifically, testbed experiments have been conducted with commodity sensor motes in a real-life agricultural field. The results highlight the asymmetry between UG2AG and AG2UG communication with distinct behaviors for different burial depths. To combat the adverse effects of the change in wavelength in soil, an ultra wideband antenna scheme is deployed, which increases the communication range by more than 350% compared to the original antennas. The results also reveal that a 21% increase in the soil moisture decreases the communication range by more than 70%. To the best of our knowledge, this is the first empirical study that highlights the effects of the antenna design, burial depth, and soil moisture on both UG2AG and AG2UG communication performance. These results have a significant impact on the development of multi-hop networking protocols for WUSNs.", "title": "" }, { "docid": "463eb90754d21c43ee61e7e18256c66b", "text": "A low-profile metamaterial loaded antenna array with anti-interference and polarization reconfigurable features is proposed for base-station communication. 
Owing to the dual notches etched on the radiating electric dipoles, an impedance bandwidth of 75.6% ranging from 1.68 to 3.72 GHz with a notch band from 2.38 to 2.55 GHz can be achieved. By employing the metamaterial loadings that are arranged in the center of the magnetic dipole, the thickness of the proposed antenna can be decreased from 28 to 20 mm. Furthermore, a serial feeding network that consists of several Wilkinson power dividers and phase shifters is introduced to attain the conversion between dual-linear polarization and triple-circular polarization. Hence, the antenna could meet the demand of the future 5G intelligent application.", "title": "" }, { "docid": "b36e9a2f1143fa242c4d372cb0ba38b3", "text": "Invariance to nuisance transformations is one of the desirable properties of effective representations. We consider transformations that form a group and propose an approach based on kernel methods to derive local group invariant representations. Locality is achieved by defining a suitable probability distribution over the group which in turn induces distributions in the input feature space. We learn a decision function over these distributions by appealing to the powerful framework of kernel methods and generate local invariant random feature maps via kernel approximations. We show uniform convergence bounds for kernel approximation and provide generalization bounds for learning with these features. We evaluate our method on three real datasets, including Rotated MNIST and CIFAR-10, and observe that it outperforms competing kernel based approaches. The proposed method also outperforms deep CNN on RotatedMNIST and performs comparably to the recently proposed group-equivariant CNN.", "title": "" }, { "docid": "9e3263866208bbc6a9019b3c859d2a66", "text": "A residual network (or ResNet) is a standard deep neural net architecture, with stateof-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.", "title": "" }, { "docid": "0745755e5347c370cdfbeca44dc6d288", "text": "For many decades correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence; which is sufficient for complete statistical descriptions of Gaussian signals of known means. 
However, there are practical situations where one needs to look beyond autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most of the biomedical signals are non-linear, non-stationary and non-Gaussian in nature and therefore it can be more advantageous to analyze them with HOS compared to the use of second-order correlations and power spectra. In this paper we have discussed the application of HOS for different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal and applications to other signals are reviewed.", "title": "" }, { "docid": "e04e1dc5cd4d0729c661375486884b14", "text": "The Internet of Things (IoT) and the Web are closely related to each other. On the one hand, the Semantic Web has been including vocabularies and semantic models for the Internet of Things. On the other hand, the so-called Web of Things (WoT) advocates architectures relying on established Web technologies and RESTful interfaces for the IoT. In this paper, we present a vocabulary for WoT that aims at defining IoT concepts using terms from the Web. Notably, it includes two concepts identified as the core WoT resources: Thing Description (TD) and Interaction, that have been first elaborated by the W3C interest group for WoT. Our proposal is built upon the ontological pattern Identifier, Resource, Entity (IRE) that was originally designed for the Semantic Web. To better analyze the alignments our proposal allows, we reviewed existing IoT models as a vocabulary graph, complying with the approach of Linked Open Vocabularies (LOV).", "title": "" }, { "docid": "243d1dc8df4b8fbd37cc347a6782a2b5", "text": "This paper introduces a framework for `curious neural controllers' which employ an adaptive world model for goal directed on-line learning. First an on-line reinforcement learning algorithm for autonomous `animats' is described. The algorithm is based on two fully recurrent `self-supervised' continually running networks which learn in parallel. One of the networks learns to represent a complete model of the environmental dynamics and is called the `model network'. It provides complete `credit assignment paths' into the past for the second network which controls the animat's physical actions in a possibly reactive environment. The animat's goal is to maximize cumulative reinforcement and minimize cumulative `pain'. The algorithm has properties which allow to implement something like the desire to improve the model network's knowledge about the world. This is related to curiosity. It is described how the particular algorithm (as well as similar model-building algorithms) may be augmented by dynamic curiosity and boredom in a natural manner. This may be done by introducing (delayed) reinforcement for actions that increase the model network's knowledge about the world. This in turn requires the model network to model its own ignorance, thus showing a rudimentary form of self-introspective behavior.", "title": "" }, { "docid": "95e2a5dfa0b5e8d8719ae86f17f6d653", "text": "Time series classification is an increasing research topic due to the vast amount of time series data that is being created over a wide variety of fields. 
The particularity of the data makes it a challenging task and different approaches have been taken, including the distance based approach. 1-NN has been a widely used method within distance based time series classification due to its simplicity but still good performance. However, its supremacy may be attributed to being able to use specific distances for time series within the classification process and not to the classifier itself. With the aim of exploiting these distances within more complex classifiers, new approaches have arisen in the past few years that are competitive or which outperform the 1-NN based approaches. In some cases, these new methods use the distance measure to transform the series into feature vectors, bridging the gap between time series and traditional classifiers. In other cases, the distances are employed to obtain a time series kernel and enable the use of kernel methods for time series classification. One of the main challenges is that a kernel function must be positive semi-definite, a matter that is also addressed within this review. The presented review includes a taxonomy of all those methods that aim to classify time series using a distance based approach, as well as a discussion of the strengths and weaknesses of each method.", "title": "" }, { "docid": "1b9ecdeb1df8eaf7cfef88acbe093d78", "text": "Chemical databases store information in text representations, and the SMILES format is a universal standard used in many cheminformatics software. Encoded in each SMILES string is structural information that can be used to predict complex chemical properties. In this work, we develop SMILES2vec, a deep RNN that automatically learns features from SMILES to predict chemical properties, without the need for additional explicit feature engineering. Using Bayesian optimization methods to tune the network architecture, we show that an optimized SMILES2vec model can serve as a general-purpose neural network for predicting distinct chemical properties including toxicity, activity, solubility and solvation energy, while also outperforming contemporary MLP neural networks that uses engineered features. Furthermore, we demonstrate proof-of-concept of interpretability by developing an explanation mask that localizes on the most important characters used in making a prediction. When tested on the solubility dataset, it identified specific parts of a chemical that is consistent with established first-principles knowledge with an accuracy of 88%. Our work demonstrates that neural networks can learn technically accurate chemical concept and provide state-of-the-art accuracy, making interpretable deep neural networks a useful tool of relevance to the chemical industry.", "title": "" }, { "docid": "a769b8f56d699b3f6eca54aeeb314f84", "text": "Assistive mobile robots that autonomously manipulate objects within everyday settings have the potential to improve the lives of the elderly, injured, and disabled. Within this paper, we present the most recent version of the assistive mobile manipulator EL-E with a focus on the subsystem that enables the robot to retrieve objects from and deliver objects to flat surfaces. Once provided with a 3D location via brief illumination with a laser pointer, the robot autonomously approaches the location and then either grasps the nearest object or places an object. 
We describe our implementation in detail, while highlighting design principles and themes, including the use of specialized behaviors, task-relevant features, and low-dimensional representations. We also present evaluations of EL-E’s performance relative to common forms of variation. We tested EL-E’s ability to approach and grasp objects from the 25 object categories that were ranked most important for robotic retrieval by motor-impaired patients from the Emory ALS Center. Although reliability varied, EL-E succeeded at least once with objects from 21 out of 25 of these categories. EL-E also approached and grasped a cordless telephone on 12 different surfaces including floors, tables, and counter tops with 100% success. The same test using a vitamin pill (ca. 15mm ×5mm ×5mm) resulted in 58% success.", "title": "" } ]
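The distance-based time-series classification review quoted in the list above notes that a kernel built from a distance such as DTW must be positive semi-definite before kernel methods can be used. Purely as an illustration (not code from any cited paper), the sketch below builds a Gaussian-of-DTW similarity matrix over a few toy series and inspects its eigenvalues; the toy series, the bandwidth sigma, and the function names are assumptions made for this example.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW distance between two 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gaussian_dtw_matrix(series, sigma=2.0):
    """Candidate 'kernel' K[i, j] = exp(-DTW(s_i, s_j)^2 / (2 sigma^2))."""
    n = len(series)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            d = dtw_distance(series[i], series[j])
            K[i, j] = K[j, i] = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    return K

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy series of different lengths, which DTW handles natively.
    toy = [np.sin(np.linspace(0, 2 * np.pi, L)) + 0.3 * rng.standard_normal(L)
           for L in (30, 40, 50, 35, 45)]
    K = gaussian_dtw_matrix(toy)
    eigvals = np.linalg.eigvalsh(K)
    # A negative smallest eigenvalue means the matrix is not PSD and hence not a
    # valid kernel matrix without a correction such as spectrum clipping.
    print("smallest eigenvalue:", eigvals.min())
```

With only five well-behaved series the matrix may well come out PSD; the point of the sketch is the check itself, which is the kind of verification the review says is needed before plugging such a matrix into a kernel machine.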
scidocsrr
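The higher-order-spectra passage in the record above describes polyspectra as spectral representations of third-order and higher statistics that expose non-Gaussianity and phase relations. As a hedged illustration only, here is a minimal direct (FFT-based) bispectrum estimator run on a toy signal with quadratically coupled components; the segment length, FFT size, and test frequencies are my own choices and do not come from the passage.

```python
import numpy as np

def bispectrum(x, seg_len=256, nfft=256):
    """Direct, un-normalized bispectrum estimate averaged over non-overlapping
    segments: B(f1, f2) = mean_k X_k(f1) * X_k(f2) * conj(X_k(f1 + f2))."""
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft  # f1 + f2, wrapped
    B = np.zeros((nfft, nfft), dtype=complex)
    for s in segs:
        X = np.fft.fft(s - s.mean(), nfft)
        B += np.outer(X, X) * np.conj(X[idx])
    return B / max(len(segs), 1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, t = 4096, np.arange(4096)
    # Quadratic phase coupling: components at f1, f2 and f1 + f2 give a bispectral peak.
    x = (np.cos(2 * np.pi * 0.05 * t) + np.cos(2 * np.pi * 0.12 * t)
         + np.cos(2 * np.pi * 0.17 * t) + 0.5 * rng.standard_normal(n))
    B = np.abs(bispectrum(x))
    f1, f2 = np.unravel_index(B[:128, :128].argmax(), (128, 128))
    print("strongest bifrequency (normalized):", f1 / 256, f2 / 256)
```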
fc30a405fa82f60d8d32a4269c39dd5a
Inner engraving for the creation of a balanced LEGO sculpture
[ { "docid": "8689b038c62d96adf1536594fcc95c07", "text": "We present an interactive system that allows users to design original pop-up cards. A pop-up card is an interesting form of papercraft consisting of folded paper that forms a three-dimensional structure when opened. However, it is very difficult for the average person to design pop-up cards from scratch because it is necessary to understand the mechanism and determine the positions of objects so that pop-up parts do not collide with each other or protrude from the card. In the proposed system, the user interactively sets and edits primitives that are predefined in the system. The system simulates folding and opening of the pop-up card using a mass–spring model that can simply simulate the physical movement of the card. This simulation detects collisions and protrusions and illustrates the movement of the pop-up card. The results of the present study reveal that the user can design a wide range of pop-up cards using the proposed system.", "title": "" }, { "docid": "1b314c55b86355e1fd0ef5d5ce9a89ba", "text": "3D printing technology is rapidly maturing and becoming ubiquitous. One of the remaining obstacles to wide-scale adoption is that the object to be printed must fit into the working volume of the 3D printer. We propose a framework, called Chopper, to decompose a large 3D object into smaller parts so that each part fits into the printing volume. These parts can then be assembled to form the original object. We formulate a number of desirable criteria for the partition, including assemblability, having few components, unobtrusiveness of the seams, and structural soundness. Chopper optimizes these criteria and generates a partition either automatically or with user guidance. Our prototype outputs the final decomposed parts with customized connectors on the interfaces. We demonstrate the effectiveness of Chopper on a variety of non-trivial real-world objects.", "title": "" } ]
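The Chopper passage above decomposes a large model so that every part fits the printer's working volume. The sketch below shows only the simplest feasibility test such a pipeline needs: whether each part's axis-aligned extents fit the build volume under 90-degree re-orientations. The part names, extents, and the 250 x 210 x 200 mm volume are hypothetical and are not taken from the paper, which optimizes several further criteria (seams, assemblability, structural soundness).

```python
import itertools

def fits_printer(part_extents, volume=(250.0, 210.0, 200.0)):
    """True if extents (dx, dy, dz) fit the working volume under some axis
    permutation, i.e. after re-orienting the part by 90-degree rotations only."""
    for perm in itertools.permutations(part_extents):
        if all(p <= v for p, v in zip(perm, volume)):
            return True
    return False

def oversize_parts(parts, volume=(250.0, 210.0, 200.0)):
    """Check every part of a decomposition and return the offenders."""
    return [name for name, ext in parts.items() if not fits_printer(ext, volume)]

if __name__ == "__main__":
    # Hypothetical decomposition of a large model into named parts (extents in mm).
    parts = {
        "torso": (180.0, 140.0, 240.0),   # tall, but fits once laid on its side
        "head":  (90.0, 90.0, 110.0),
        "base":  (300.0, 220.0, 60.0),    # too wide for this printer in any orientation
    }
    bad = oversize_parts(parts)
    print("parts that still exceed the working volume:", bad or "none")
```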
[ { "docid": "0f96bdaca2e1e0faaa785c59d24e9d5a", "text": "Recent studies indicate that Traditional Chinese medicine (TCM) can play an important role in the whole course of cancer treatment such as recovery stages of post-operative, radiotherapy or chemotherapy stages instead of only terminal stage of cancer. In this review, we have summarized current evidence for using TCM as adjuvant cancer treatment in different stages of cancer lesions. Some TCMs (e.g., TJ-41, Liu-jun-zi-tang, PHY906, Coumarin, and Aescine) are capable of improving the post-operative symptoms such as fatigue, pain, appetite, diarrhea, nausea, vomiting, and lymphedema. Some TCMs (e.g., Ginseng, Huang-Qi, BanZhiLian, TJ-48, Huachansu injection, Shenqi fuzheng injection, and Kanglaite injection) in combination with chemo- or radio-therapy are capable of enhancing the efficacy of and diminishing the side effects and complications caused by chemo- and radiotherapy. Taken together, they have great advantages in terms of suppressing tumor progression, relieving surgery complications, increasing the sensitivity of chemo- and radio- therapeutics, improving an organism's immune system function, and lessening the damage caused by surgery, chemo- or radio-therapeutics. They have significant effects on relieving breast cancer-related lymphedema, reducing cancer-related fatigue and pain, improving radiation pneumonitis and gastrointestinal side effects, protecting liver function, and even ameliorating bone marrow suppression. This review of those medicines should contribute to an understanding of Chinese herbal medicines as an adjunctive therapy in the whole course of cancer treatment instead of only terminal stage of cancer, by providing useful information for development of more effective anti-cancer drugs and making more patients \"survival with cancer\" for a long time.", "title": "" }, { "docid": "c7808ecbca4c5bf8e8093dce4d8f1ea7", "text": "41  Abstract— This project deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. The robot system consists of a Robot body, a control system, a CMOS camera, an accelerometer, a temperature sensor, a ZigBee module. The robot module will be designed with the help of CAD tool. The control system consists of Atmega16 micro controller and Atmel studio IDE. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to have grip of the pipe walls. Unique features of this robot are the caterpillar wheel, the four-bar mechanism supports the well grip of wall, a simple and easy user interface.", "title": "" }, { "docid": "18278db21edaef3446c2bbaa976d88ef", "text": "In the current IoT (Internet of Things) environment, more and more Things: devices, objects, sensors, and everyday items not usually considered computers, are connected to the Internet, and these Things affect and change our social life and economic activities. By using IoTs, service providers can collect and store personal information in the real world, and such providers can gain access to detailed behaviors of the user. Although service providers offer users new services and numerous benefits using their detailed information, most users have concerns about the privacy and security of their personal data. Thus, service providers need to take countermeasures to eliminate those concerns. 
To help eliminate those concerns, first we conduct a survey regarding users’ privacy and security concerns about IoT services, and then we analyze data collected from the survey using structural equation modeling (SEM). Analysis of the results provide answers to issues of privacy and security concerns to service providers and their users. And we also analyze the effectiveness and effects of personal information management and protection functions in IoT services. key words: IoT (Internet of Things), privacy, security, SEM (Structural Equation Modeling)", "title": "" }, { "docid": "85b1fe5c3d6d68791345d32eda99055b", "text": "Surgery and other invasive therapies are complex interventions, the assessment of which is challenged by factors that depend on operator, team, and setting, such as learning curves, quality variations, and perception of equipoise. We propose recommendations for the assessment of surgery based on a five-stage description of the surgical development process. We also encourage the widespread use of prospective databases and registries. Reports of new techniques should be registered as a professional duty, anonymously if necessary when outcomes are adverse. Case series studies should be replaced by prospective development studies for early technical modifications and by prospective research databases for later pre-trial evaluation. Protocols for these studies should be registered publicly. Statistical process control techniques can be useful in both early and late assessment. Randomised trials should be used whenever possible to investigate efficacy, but adequate pre-trial data are essential to allow power calculations, clarify the definition and indications of the intervention, and develop quality measures. Difficulties in doing randomised clinical trials should be addressed by measures to evaluate learning curves and alleviate equipoise problems. Alternative prospective designs, such as interrupted time series studies, should be used when randomised trials are not feasible. Established procedures should be monitored with prospective databases to analyse outcome variations and to identify late and rare events. Achievement of improved design, conduct, and reporting of surgical research will need concerted action by editors, funders of health care and research, regulatory bodies, and professional societies.", "title": "" }, { "docid": "de703c909703b2dcabf7d99a4b5e1493", "text": "The ultimate goal of this paper is to print radio frequency (RF) and microwave structures using a 3-D platform and to pattern metal films on nonplanar structures. To overcome substrate losses, air core substrates that can readily be printed are utilized. To meet the challenge of patterning conductive layers on complex or nonplanar printed structures, two novel self-aligning patterning processes are demonstrated. One is a simple damascene-like process, and the other is a lift-off process using a 3-D printed lift-off mask layer. A range of microwave and RF circuits are designed and demonstrated between 1 and 8 GHz utilizing these processes. Designs are created and simulated using Keysight Advanced Design System and ANSYS High Frequency Structure Simulator. Circuit designs include a simple microstrip transmission line (T-line), coupled-line bandpass filter, circular ring resonator, T-line resonator, resonant cavity structure, and patch antenna. A commercially available 3-D printer and metal sputtering system are used to realize the designs. 
Both simulated and measured results of these structures are presented.", "title": "" }, { "docid": "1edf460bcfc83ebc8bd66f2cb51e4a61", "text": "A distributed system with interchangeable constraints for studying skillful human movements via haptic displays is presented. A unified interface provides easy linking of various physical models with spatial constraints, and the graphical contents related to the models as well. Theoretical and experimental kinematic profiles are compared for several cases of basic reaching rest-to-rest tasks: curve-constrained motions, flexible object control, and cooperative two-hand movements. The experimental patterns exhibit the best agreement with the optimal control models based on force-change minimization criteria.", "title": "" }, { "docid": "b37a2f3acae914632d6990df427be2c2", "text": "Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (Arora et al., 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 “discourse atoms” that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory.", "title": "" }, { "docid": "aec560c27d4873674114bd5dd9d64625", "text": "Caches consume a significant amount of energy in modern microprocessors. To design an energy-efficient microprocessor, it is important to optimize cache energy consumption. This paper examines performance and power trade-offs in cache designs and the effectiveness of energy reduction for several novel cache design techniques targeted for low power.", "title": "" }, { "docid": "f456edd4d56dab8f0a60a3cef87f6cdb", "text": "In this paper, we propose Sequential Grouping Networks (SGN) to tackle the problem of object instance segmentation. SGNs employ a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. In particular, the first network aims to group pixels along each image row and column by predicting horizontal and vertical object breakpoints. These breakpoints are then used to create line segments. By exploiting two-directional information, the second network groups horizontal and vertical lines into connected components. Finally, the third network groups the connected components into object instances. Our experiments show that our SGN significantly outperforms state-of-the-art approaches in both, the Cityscapes dataset as well as PASCAL VOC.", "title": "" }, { "docid": "eccfd842c24cf6b87c2b311dc8b29dd3", "text": "Sparse coding exhibits good performance in many computer vision applications. However, due to the overcomplete codebook and the independent coding process, the locality and the similarity among the instances to be encoded are lost. To preserve such locality and similarity information, we propose a Laplacian sparse coding (LSc) framework. 
By incorporating the similarity preserving term into the objective of sparse coding, our proposed Laplacian sparse coding can alleviate the instability of sparse codes. Furthermore, we propose a Hypergraph Laplacian sparse coding (HLSc), which extends our Laplacian sparse coding to the case where the similarity among the instances is defined by a hypergraph. Specifically, this HLSc captures the similarity among the instances within the same hyperedge simultaneously, and also makes their sparse codes similar to each other. Both Laplacian sparse coding and Hypergraph Laplacian sparse coding enhance the robustness of sparse coding. We apply the Laplacian sparse coding to feature quantization in Bag-of-Words image representation, and it outperforms sparse coding and achieves good performance in solving the image classification problem. The Hypergraph Laplacian sparse coding is also successfully used to solve the semi-auto image tagging problem. The good performance of these applications demonstrates the effectiveness of our proposed formulations in locality and similarity preservation.", "title": "" }, { "docid": "3e807b9119bc13c2ffbdc57e79c6523e", "text": "Social network has gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and the web 2.0 technologies has become more affordable. People are becoming more interested in and relying on social network for information, news and opinion of other users on diverse subject matters. The heavy reliance on social network sites causes them to generate massive data characterised by three computational issues namely: size, noise and dynamism. These issues often make social network data very complex to analyse manually, resulting in the pertinent use of computational means of analysing them. Data mining provides a wide range of techniques for detecting useful knowledge from massive datasets like trends, patterns and rules [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning. These techniques employ data pre-processing, data analysis, and data interpretation processes in the course of data analysis. This survey discusses different data mining techniques used in mining diverse aspects of the social network over decades going from the historical techniques to the up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1 including the tools employed as well as names of their authors.", "title": "" }, { "docid": "fc2046c92508cb0d6fe2b60c0eb8d2be", "text": "Voting is an inherent process in a democratic society. Other methods for expressing the society participants’ will, for example caucuses in US party elections or the Landsgemeinde in Switzerland, can be inconvenient for the citizens and logistically difficult to organize. Furthermore, beyond inconvenience, there may be legitimate reasons for not being able to take part in the voting process, e.g. being deployed overseas in the military or being on some other official assignment. Moreover, filling in paper ballots and counting them is an error-prone and time-consuming process. A well-known controversy took place during the US presidential election in 2000 [Florida recount 2000], when a partial recount of the votes could have changed the outcome of the elections. As the recount was cancelled by the court, the actual result was never known. 
Decline in elections’ participation rate has been observed in many old democracies [Summers 2016] and it should be the decision-makers goal to bring the electorate back to the polling booths. One way to do that would be to use internet voting. In this method, the ballots are cast using a personal computer or a smart phone and it sent over the internet to the election committee. However, there have been several critics against the internet voting methods [Springall et al. 2014]. In this report we consider, how to make internet voting protocols more secure by using blockchain.", "title": "" }, { "docid": "b5ee9aa4463d313c9f22e085af4fe541", "text": "A comprehensive first visit with a gynecologist can lay the groundwork for positive health outcomes throughout a female adolescent's life. This visit gives the clinician the opportunity to gauge both the physical and psychosocial health and development of the adolescent patient. Physical screening should be combined with an assessment of the patient's environment and risk behaviors along with counseling on healthy behavior for both the patient and her parent or guardian.", "title": "" }, { "docid": "a5255efa61de43a3341473facb4be170", "text": "Differentiation of 3T3-L1 preadipocytes can be induced by a 2-d treatment with a factor \"cocktail\" (DIM) containing the synthetic glucocorticoid dexamethasone (dex), insulin, the phosphodiesterase inhibitor methylisobutylxanthine (IBMX) and fetal bovine serum (FBS). We temporally uncoupled the activities of the four DIM components and found that treatment with dex for 48 h followed by IBMX treatment for 48 h was sufficient for adipogenesis, whereas treatment with IBMX followed by dex failed to induce significant differentiation. Similar results were obtained with C3H10T1/2 and primary mesenchymal stem cells. The 3T3-L1 adipocytes differentiated by sequential treatment with dex and IBMX displayed insulin sensitivity equivalent to DIM adipocytes, but had lower sensitivity to ISO-stimulated lipolysis and reduced triglyceride content. The nondifferentiating IBMX-then-dex treatment produced transient expression of adipogenic transcriptional regulatory factors C/EBPbeta and C/EBPdelta, and little induction of terminal differentiation factors C/EBPalpha and PPARgamma. Moreover, the adipogenesis inhibitor preadipocyte factor-1 (Pref-1) was repressed by DIM or by dex-then-IBMX, but not by IBMX-then-dex treatment. We conclude that glucocorticoids drive preadipocytes to a novel intermediate cellular state, the dex-primed preadipocyte, during adipogenesis in cell culture, and that Pref-1 repression may be a cell fate determinant in preadipocytes.", "title": "" }, { "docid": "d566e25ed5ff6e479887a350572cadad", "text": "Lorentz reciprocity is a fundamental characteristic of the vast majority of electronic and photonic structures. However, non-reciprocal components such as isolators, circulators and gyrators enable new applications ranging from radio frequencies to optical frequencies, including full-duplex wireless communication and on-chip all-optical information processing. Such components today dominantly rely on the phenomenon of Faraday rotation in magneto-optic materials. However, they are typically bulky, expensive and not suitable for insertion in a conventional integrated circuit. Here we demonstrate magnetic-free linear passive non-reciprocity based on the concept of staggered commutation. Commutation is a form of parametric modulation with very high modulation ratio. 
We observe that staggered commutation enables time-reversal symmetry breaking within very small dimensions (λ/1,250 × λ/1,250 in our device), resulting in a miniature radio-frequency circulator that exhibits reduced implementation complexity, very low loss, strong non-reciprocity, significantly enhanced linearity and real-time reconfigurability, and is integrated in a conventional complementary metal-oxide-semiconductor integrated circuit for the first time.", "title": "" }, { "docid": "76786db6457b02b2361a8b46e0955431", "text": "Studies of learning, and in particular perceptual learning, have focused on learning of stimuli consisting of a single sensory modality. However, our experience in the world involves constant multisensory stimulation. For instance, visual and auditory information are integrated in performing many tasks that involve localizing and tracking moving objects. Therefore, it is likely that the human brain has evolved to develop, learn and operate optimally in multisensory environments. We suggest that training protocols that employ unisensory stimulus regimes do not engage multisensory learning mechanisms and, therefore, might not be optimal for learning. However, multisensory-training protocols can better approximate natural settings and are more effective for learning.", "title": "" }, { "docid": "b55fa34c0a969e93c3a02edccf4d9dcd", "text": "This paper describes the Flexible Navigation system that extends the ROS Navigation stack and compatible libraries to separate computation from decision making, and integrates the system with FlexBE — the Flexible Behavior Engine, which provides intuitive supervision with adjustable autonomy. Although the ROS Navigation plugin model offers some customization, many decisions are internal to move_base. In contrast, the Flexible Navigation system separates global planning from local planning and control, and uses a hierarchical finite state machine to coordinate behaviors. The Flexible Navigation system includes Python-based state implementations and ROS nodes derived from the move_base plugin model to provide compatibility with existing libraries as well as future extensibility. The paper concludes with complete system demonstrations in both simulation and hardware using the iRobot Create and Kobuki-based Turtlebot running under ROS Kinetic. The system supports multiple independent robots.", "title": "" }, { "docid": "f3115abc9b159be833560ee5276c06b7", "text": "This paper describes a strategy on learning from time series data and on using learned model for forecasting. Time series forecasting, which analyzes and predicts a variable changing over time, has received much attention due to its use for forecasting stock prices, but it can also be used for pattern recognition and data mining. Our method for learning from time series data consists of detecting patterns within the data, describing the detected patterns, clustering the patterns, and creating a model to describe the data. It uses a change-point detection method to partition a time series into segments, each of the segments is then described by an autoregressive model. Then, it partitions all the segments into clusters, each of the clusters is considered as a state for a Markov model. It then creates the transitions between states in the Markov model based on the transitions between segments as the time series progressing. Our method for using the learned model for forecasting consists of indentifying current state, forecasting trends, and adapting to changes. 
It uses a moving window to monitor real-time data and creates an autoregressive model for the recently observed data, which is then matched to a state of the learned Markov model. Following the transitions of the model, it forecasts future trends. It also continues to monitor real-time data and makes corrections if necessary for adapting to changes. We implemented and successfully tested the methods for an application of load balancing on a parallel computing system.", "title": "" }, { "docid": "f75ae6fedddde345109d33499853256d", "text": "Deaths due to prescription and illicit opioid overdose have been rising at an alarming rate, particularly in the USA. Although naloxone injection is a safe and effective treatment for opioid overdose, it is frequently unavailable in a timely manner due to legal and practical restrictions on its use by laypeople. As a result, an effort spanning decades has resulted in the development of strategies to make naloxone available for layperson or \"take-home\" use. This has included the development of naloxone formulations that are easier to administer for nonmedical users, such as intranasal and autoinjector intramuscular delivery systems, efforts to distribute naloxone to potentially high-impact categories of nonmedical users, as well as efforts to reduce regulatory barriers to more widespread distribution and use. Here we review the historical and current literature on the efficacy and safety of naloxone for use by nonmedical persons, provide an evidence-based discussion of the controversies regarding the safety and efficacy of different formulations of take-home naloxone, and assess the status of current efforts to increase its public distribution. Take-home naloxone is safe and effective for the treatment of opioid overdose when administered by laypeople in a community setting, shortening the time to reversal of opioid toxicity and reducing opioid-related deaths. Complementary strategies have together shown promise for increased dissemination of take-home naloxone, including 1) provision of education and training; 2) distribution to critical populations such as persons with opioid addiction, family members, and first responders; 3) reduction of prescribing barriers to access; and 4) reduction of legal recrimination fears as barriers to use. Although there has been considerable progress in decreasing the regulatory and legal barriers to effective implementation of community naloxone programs, significant barriers still exist, and much work remains to be done to integrate these programs into efforts to provide effective treatment of opioid use disorders.", "title": "" }, { "docid": "2a3a494684865cfdd5cc9e86670d1a1a", "text": "Human engagement in narrative is partially driven by reasoning about discourse relations between narrative events, and the expectations about what is likely to happen next that results from such reasoning. Researchers in NLP have tackled modeling such expectations from a range of perspectives, including treating it as the inference of the CONTINGENT discourse relation, or as a type of common-sense causal reasoning. Our approach is to model likelihood between events by drawing on several of these lines of previous work. We implement and evaluate different unsupervised methods for learning event pairs that are likely to be CONTINGENT on one another. We refine event pairs that we learn from a corpus of film scene descriptions utilizing web search counts, and evaluate our results by collecting human judgments of contingency. 
Our results indicate that the use of web search counts increases the average accuracy of our best method to 85.64% over a baseline of 50%, as compared to an average accuracy of 75.15% without web search.", "title": "" } ]
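The Laplacian sparse coding passage in the list above adds a similarity-preserving graph term to the sparse coding objective. One standard way to write that idea is ||X - DS||_F^2 + lambda ||S||_1 + beta tr(S L S^T), with L the graph Laplacian of the similarity matrix; the numpy sketch below evaluates this objective and runs a plain proximal-gradient (ISTA) update on the codes. The toy data, the fixed step size, and the random similarity graph are assumptions for illustration, not the authors' optimization procedure.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a symmetric similarity matrix W."""
    return np.diag(W.sum(axis=1)) - W

def lsc_objective(X, D, S, W, lam=0.1, beta=0.1):
    """Laplacian sparse coding objective: ||X - DS||_F^2 + lam*||S||_1 + beta*tr(S L S^T)."""
    L = graph_laplacian(W)
    recon = np.linalg.norm(X - D @ S, "fro") ** 2
    return recon + lam * np.abs(S).sum() + beta * np.trace(S @ L @ S.T)

def ista_step(X, D, S, W, lam=0.1, beta=0.1, step=1e-3):
    """One proximal-gradient step on the codes S (step chosen small for illustration)."""
    L = graph_laplacian(W)
    grad = 2.0 * D.T @ (D @ S - X) + 2.0 * beta * S @ L   # gradient of the smooth part
    Z = S - step * grad
    return np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # soft-threshold (L1 prox)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((20, 50))          # 50 descriptors of dimension 20
    D = rng.standard_normal((20, 100))         # overcomplete codebook with 100 atoms
    D /= np.linalg.norm(D, axis=0)
    S = np.zeros((100, 50))
    W = (rng.random((50, 50)) > 0.9).astype(float)   # sparse toy similarity graph
    W = np.maximum(W, W.T)
    np.fill_diagonal(W, 0.0)
    before = lsc_objective(X, D, S, W)
    for _ in range(50):
        S = ista_step(X, D, S, W)
    print("objective before/after:", before, lsc_objective(X, D, S, W))
```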
scidocsrr
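The time-series forecasting passage in the preceding record segments a series with change-point detection, fits an autoregressive model per segment, clusters segments into Markov states, and follows the learned transitions to forecast. The miniature sketch below follows that pipeline with fixed-length segments standing in for change-point detection; the AR(1) order, the choice of k = 2 states, and the synthetic regimes are assumptions made for the example.

```python
import numpy as np

def ar1_fit(seg):
    """Least-squares fit of x_t = a * x_{t-1} + b over one segment."""
    x, y = seg[:-1], seg[1:]
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.array([a, b])

def kmeans(points, k, iters=50, seed=0):
    """Tiny k-means, enough for clustering segment descriptors."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def transition_matrix(labels, k):
    """Row-normalized counts of state j following state i along the series."""
    T = np.zeros((k, k))
    for i, j in zip(labels[:-1], labels[1:]):
        T[i, j] += 1.0
    return T / np.maximum(T.sum(axis=1, keepdims=True), 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Toy series: alternating calm and trending regimes.
    series = np.concatenate([np.cumsum(rng.normal(m, 1.0, 80)) for m in (0.0, 0.5) * 3])
    segments = np.array_split(series, 12)   # fixed-length stand-in for change-point detection
    descr = np.array([ar1_fit(s) for s in segments])
    labels = kmeans(descr, k=2)
    T = transition_matrix(labels, k=2)
    print("current state:", labels[-1], "-> most likely next state:", int(T[labels[-1]].argmax()))
```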
a20126e82770da6018c1c10d310c0bcb
Vision and Learning for Deliberative Monocular Cluttered Flight
[ { "docid": "e0f4670762f2df2b6e9af3d86ec62e2b", "text": "We address the task of pixel-level hand detection in the context of ego-centric cameras. Extracting hand regions in ego-centric videos is a critical step for understanding hand-object manipulation and analyzing hand-eye coordination. However, in contrast to traditional applications of hand detection, such as gesture interfaces or sign-language recognition, ego-centric videos present new challenges such as rapid changes in illuminations, significant camera motion and complex hand-object manipulations. To quantify the challenges and performance in this new domain, we present a fully labeled indoor/outdoor ego-centric hand detection benchmark dataset containing over 200 million labeled pixels, which contains hand images taken under various illumination conditions. Using both our dataset and a publicly available ego-centric indoors dataset, we give extensive analysis of detection performance using a wide range of local appearance features. Our analysis highlights the effectiveness of sparse features and the importance of modeling global illumination. We propose a modeling strategy based on our findings and show that our model outperforms several baseline approaches.", "title": "" }, { "docid": "fe3ccdc73ef42cebdc602544e4279825", "text": "Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straight-forward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs) which operate at low altitude in cluttered environments. Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5m/s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAVs heading. We demonstrate the performance of our system in a more controlled environment indoors, and in real natural forest environments outdoors.", "title": "" } ]
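The forest-flight passage above trains a monocular reactive controller from human pilot demonstrations. The code below is only a schematic stand-in: it fits a ridge regressor from synthetic per-column clutter scores to a heading command, whereas the actual system uses state-of-the-art imitation learning on real image features. The feature layout, labels, and regularization strength are invented for the illustration.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def steering_from_features(feat, w):
    """Map a feature vector (here, per-column clutter scores) to a heading offset."""
    return float(feat @ w)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Stand-in for visual features: 16 per-column "clutter" scores per frame.
    X = rng.random((500, 16))
    # Stand-in expert labels: steer away from the more cluttered side of the image.
    y = X[:, :8].sum(axis=1) - X[:, 8:].sum(axis=1)
    w = fit_ridge(X, y, lam=0.1)
    test_frame = rng.random(16)
    print("commanded heading offset:", steering_from_features(test_frame, w))
```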
[ { "docid": "4eafe7f60154fa2bed78530735a08878", "text": "Although Android's permission system is intended to allow users to make informed decisions about their privacy, it is often ineffective at conveying meaningful, useful information on how a user's privacy might be impacted by using an application. We present an alternate approach to providing users the knowledge needed to make informed decisions about the applications they install. First, we create a knowledge base of mappings between API calls and fine-grained privacy-related behaviors. We then use this knowledge base to produce, through static analysis, high-level behavior profiles of application behavior. We have analyzed almost 80,000 applications to date and have made the resulting behavior profiles available both through an Android application and online. Nearly 1500 users have used this application to date. Based on 2782 pieces of application-specific feedback, we analyze users' opinions about how applications affect their privacy and demonstrate that these profiles have had a substantial impact on their understanding of those applications. We also show the benefit of these profiles in understanding large-scale trends in how applications behave and the implications for user privacy.", "title": "" }, { "docid": "a531694dba7fc479b43d0725bc68de15", "text": "This paper gives an introduction to the essential challenges of software engineering and requirements that software has to fulfill in the domain of automation. Besides, the functional characteristics, specific constraints and circumstances are considered for deriving requirements concerning usability, the technical process, the automation functions, used platform and the well-established models, which are described in detail. On the other hand, challenges result from the circumstances at different points in the single phases of the life cycle of the automated system. The requirements for life-cycle-management, tools and the changeability during runtime are described in detail.", "title": "" }, { "docid": "df94e8f3c2cef683db432e3e767fe913", "text": "The design and manufacture of present-day CPUs causes inherent variation in supercomputer architectures such as variation in power and temperature of the chips. The variation also manifests itself as frequency differences among processors under Turbo Boost dynamic overclocking. This variation can lead to unpredictable and suboptimal performance in tightly coupled HPC applications. In this study, we use compute-intensive kernels and applications to analyze the variation among processors in four top supercomputers: Edison, Cab, Stampede, and Blue Waters. We observe that there is an execution time difference of up to 16% among processors on the Turbo Boost-enabled supercomputers: Edison, Cab, Stampede. There is less than 1% variation on Blue Waters, which does not have a dynamic overclocking feature. We analyze measurements from temperature and power instrumentation and find that intrinsic differences in the chips' power efficiency is the culprit behind the frequency variation. Moreover, we analyze potential solutions such as disabling Turbo Boost, leaving idle cores and replacing slow chips to mitigate the variation. We also propose a speed-aware dynamic task redistribution (load balancing) algorithm to reduce the negative effects of performance variation. 
Our speed-aware load balancing algorithm improves the performance up to 18% compared to no load balancing performance and 6% better than the non-speed aware counterpart.", "title": "" }, { "docid": "73bce3659cd0cd7b76a5bb6f8eda5523", "text": "Ransomware is a specific type of malware that threatens the victim’s access to her data unless a ransom is paid. It is also known as a cryptovirus due to its method of operation. Typically, ransomware encrypts the contents of the victim’s hard drive thereby rendering it inaccessible to the victim. Upon payment of the ransom, the decryption key is released to the victim. This means of attack is therefore also sometimes aptly called cryptoviral extortion. The ransomware itself is delivered to the victim using several channels. The most common channel of delivery is by masquerading the malware as a Trojan horse via an email attachment. In this work, we study a high-profile example of a ransomware called the WannaCry worm. This ransomware is particularly malicious since it has the ability to traverse computing equipment on a network without any human intervention. Since this worm has had a large scale impact, we find it imperative and instructive to better understand the inner workings of this high-profile ransomware. To this end, we obtain a sample of WannaCry and dissect it completely using advanced static and dynamic malware analysis techniques. This effort, we hope, will shed light on the inner workings of the malware and will enable cyber security experts to better thwart similar attacks in the future by: a) generating appropriate signatures and b) developing stronger defense solutions. Our analysis is conducted in a Win32 environment and we present our detailed analysis so as to enable reproduction of our work by other malware analysts. This, we hope, will further advancement in generating appropriate signatures to detect the worm. Secondly, we present a prototype software that will enable a user to prevent this malware from unleashing its payload and protect the user on a Win32 environment in an effort to advance the development of efficient software defense mechanisms to protect users from such a worm attack in the future. Keywords—Ransomware, cryptovirus, extortion, static and dynamic analysis, malware analysis, cyber security.", "title": "" }, { "docid": "66d6f514c6bce09110780a1130b64dfe", "text": "Today, with more competiveness of industries, markets, and working atmosphere in productive and service organizations what is very important for maintaining clients present, for attracting new clients and as a result increasing growth of success in organizations is having a suitable relation with clients. Bank is among organizations which are not an exception. Especially, at the moment according to increasing rate of banks` privatization, it can be argued that significance of attracting clients for banks is more than every time. The article tries to investigate effect of CRM on marketing performance in banking industry. The research method is applied and survey and descriptive. Statistical community of the research is 5 branches from Mellat Banks across Khoramabad Province and their clients. There are 45 personnel in this branch and according to Morgan Table the sample size was 40 people. Clients example was considered according to collected information, one questionnaire was designed for bank organization and another one was prepared for banks` clients in which reliability and validity are approved. 
The research result indicates that CRM is ineffective on marketing performance.", "title": "" }, { "docid": "32fcdb98d3c022262ddc487db5e4d27f", "text": "Music recommendation is receiving increasing attention as the music industry develops venues to deliver music over the Internet. The goal of music recommendation is to present users lists of songs that they are likely to enjoy. Collaborative-filtering and content-based recommendations are two widely used approaches that have been proposed for music recommendation. However, both approaches have their own disadvantages: collaborative-filtering methods need a large collection of user history data and content-based methods lack the ability of understanding the interests and preferences of users. To overcome these limitations, this paper presents a novel dynamic music similarity measurement strategy that utilizes both content features and user access patterns. The seamless integration of them significantly improves the music similarity measurement accuracy and performance. Based on this strategy, recommended songs are obtained by a means of label propagation over a graph representing music similarity. Experimental results on a real data set collected from http://www.newwisdom.net demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "c8f10cc90546fe5ffc7ccaabf5d9ccca", "text": "The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.", "title": "" }, { "docid": "b823d427f74963372fc7015a047cb90e", "text": "Most of the previous sparse coding (SC) based super resolution (SR) methods partition the image into overlapped patches, and process each patch separately. These methods, however, ignore the consistency of pixels in overlapped patches, which is a strong constraint for image reconstruction. In this paper, we propose a convolutional sparse coding (CSC) based SR (CSC-SR) method to address the consistency issue. Our CSC-SR involves three groups of parameters to be learned: (i) a set of filters to decompose the low resolution (LR) image into LR sparse feature maps, (ii) a mapping function to predict the high resolution (HR) feature maps from the LR ones, and (iii) a set of filters to reconstruct the HR images from the predicted HR feature maps via simple convolution operations. By working directly on the whole image, the proposed CSC-SR algorithm does not need to divide the image into overlapped patches, and can exploit the image global correlation to produce more robust reconstruction of image local structures. Experimental results clearly validate the advantages of CSC over patch based SC in SR application. 
Compared with state-of-the-art SR methods, the proposed CSC-SR method achieves highly competitive PSNR results, while demonstrating better edge and texture preservation performance.", "title": "" }, { "docid": "e0ec89c103aedb1d04fbc5892df288a8", "text": "This paper compares the computational performances of four model order reduction methods applied to large-scale electric power RLC networks transfer functions with many resonant peaks. Two of these methods require the state-space or descriptor model of the system, while the third requires only its frequency response data. The fourth method is proposed in this paper, being a combination of two of the previous methods. The methods were assessed for their ability to reduce eight test systems, either of the single-input single-output (SISO) or multiple-input multiple-output (MIMO) type. The results indicate that the reduced models obtained, of much smaller dimension, reproduce the dynamic behaviors of the original test systems over an ample range of frequencies with high accuracy.", "title": "" }, { "docid": "9e32c4fed9c9aecfba909fd82287336b", "text": "StructuredQueryLanguage injection (SQLi) attack is a code injection techniquewherehackers injectSQLcommandsintoadatabaseviaavulnerablewebapplication.InjectedSQLcommandscan modifytheback-endSQLdatabaseandthuscompromisethesecurityofawebapplication.Inthe previouspublications,theauthorhasproposedaNeuralNetwork(NN)-basedmodelfordetections andclassificationsof theSQLiattacks.Theproposedmodelwasbuiltfromthreeelements:1)a UniformResourceLocator(URL)generator,2)aURLclassifier,and3)aNNmodel.Theproposed modelwas successful to:1)detect eachgeneratedURLaseitherabenignURLoramalicious, and2)identifythetypeofSQLiattackforeachmaliciousURL.Thepublishedresultsprovedthe effectivenessoftheproposal.Inthispaper,theauthorre-evaluatestheperformanceoftheproposal throughtwoscenariosusingcontroversialdatasets.Theresultsoftheexperimentsarepresentedin ordertodemonstratetheeffectivenessoftheproposedmodelintermsofaccuracy,true-positiverate aswellasfalse-positiverate. KeyWoRDS Artificial Intelligence, Databases, Intrusion Detection, Machine Learning, Neural Networks, SQL Injection Attacks, Web Attacks", "title": "" }, { "docid": "26f2c0ca1d9fc041a6b04f0644e642dd", "text": "During the last years, in particular due to the Digital Humanities, empirical processes, data capturing or data analysis got more and more popular as part of humanities research. In this paper, we want to show that even the complete scientific method of natural science can be applied in the humanities. By applying the scientific method to the humanities, certain kinds of problems can be solved in a confirmable and replicable manner. In particular, we will argue that patterns may be perceived as the analogon to formulas in natural science. This may provide a new way of representing solution-oriented knowledge in the humanities. Keywords-pattern; pattern languages; digital humanities;", "title": "" }, { "docid": "d79a1a6398e98855ddd1181c141d7b00", "text": "In this paper we describe a new binarisation method designed specifically for OCR of low quality camera images: Background Surface Thresholding or BST. This method is robust to lighting variations and produces images with very little noise and consistent stroke width. BST computes a ”surface” of background intensities at every point in the image and performs adaptive thresholding based on this result. 
The surface is estimated by identifying regions of lowresolution text and interpolating neighbouring background intensities into these regions. The final threshold is a combination of this surface and a global offset. According to our evaluation BST produces considerably fewer OCR errors than Niblack’s local average method while also being more runtime efficient.", "title": "" }, { "docid": "3c5c2076baab2cc4f9775c84f0e382b1", "text": "Accurate event analysis in real time is of paramount importance for high-fidelity situational awareness such that proper actions can take place before any isolated faults escalate to cascading blackouts. Existing approaches are limited to detect only single or double events or a specified event type. Although some previous works can well distinguish multiple events in small-scale systems, the performance tends to degrade dramatically in large-scale systems. In this paper, we focus on multiple event detection, recognition, and temporal localization in large-scale power systems. We discover that there always exist “regions” where the reaction of all buses to certain event within each region demonstrates high degree similarity, and that the boundary of the “regions” generally remains the same regardless of the type of event(s). We further verify that, within each region, this reaction to multiple events can be approximated as a linear combination of reactions to each constituent event. Based on these findings, we propose a novel method, referred to as cluster-based sparse coding (CSC), to extract all the underlying single events involved in a multievent scenario. Multiple events of three typical disturbances (e.g., generator trip, line trip, and load shedding) can be detected and recognized. Specifically, the CSC algorithm can effectively distinguish line trip events from oscillation, which has been a very challenging task for event analysis. Experimental results based on simulated large-scale system model (i.e., NPCC) show that the proposed CSC algorithm presents high detection and recognition rate with low false alarms.", "title": "" }, { "docid": "1865a404c970d191ed55e7509b21fb9e", "text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. 
Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1", "title": "" }, { "docid": "04609d7cd9809e16f8dc81cc142b42ec", "text": "Cloud computing provides a lot of shareable resources payable on demand to the users. The drawback with cloud computing is the security challenges since the data in the cloud are managed by third party. Steganography and cryptography are some of the security measures applied in the cloud to secure user data. The objective of steganography is to hide the existence of communication from the unintended users whereas cryptography does provide security to user data to be transferred in the cloud. Since users pay for the services utilize in the cloud, the need to evaluate the performance of the algorithms used in the cloud to secure user data in order to know the resource consumed by such algorithms such as storage memory, network bandwidth, computing power, encryption and decryption time becomes imperative. In this work, we implemented and evaluated the performance of Text steganography and RSA algorithm and Image steganography and RSA as Digital signature considering four test cases. The simulation results show that, image steganography with RSA as digital signature performs better than text steganography and RSA algorithm. The performance differences between the two algorithms are 10.76, 9.93, 10.53 and 10.53 seconds for encryption time, 60.68, 40.94, 40.9, and 41.85 seconds for decryption time, 8.1, 10.92, 15.2 and 5.17 mb for memory used when hiding data, 5.3, 1.95 and 17.18 mb for memory used when extracting data, 0.93, 1.04, 1.36 and 3.76 mb for bandwidth used, 75.75, 36.2, 36.9 and 37.45 kwh for processing power used when hiding and extracting data respectively. Except in test case2 where Text steganography and RSA algorithm perform better than Image Steganography and RSA as Digital Signature in terms of memory used when extracting data with performance difference of -5.09 mb because of the bit size of the image data when extracted. This research work recommend the use of image steganography and RSA as digital signature to cloud service providers and users since it can secure major data types such as text, image, audio and video used in the cloud and consume less system resources.", "title": "" }, { "docid": "170e7a72a160951e880f18295d100430", "text": "In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are used to construct capsules in the first capsule layer. Capsule layers are connected via dynamic routing mechanism. The last capsule layer consists of only one capsule to produce a vector output. The length of this vector output is used to measure the plausibility of the triple. 
Our proposed CapsE obtains state-of-the-art link prediction results for knowledge graph completion on two benchmark datasets: WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17 dataset.", "title": "" }, { "docid": "9b656d1ae57b43bb2ccf2d971e46eae3", "text": "On the one hand, enterprises manufacturing any kinds of goods require agile production technology to be able to fully accommodate their customers’ demand for flexibility. On the other hand, Smart Objects, such as networked intelligent machines or tagged raw materials, exhibit ever increasing capabilities, up to the point where they offer their smart behaviour as web services. The two trends towards higher flexibility and more capable objects will lead to a service-oriented infrastructure where complex processes will span over all types of systems — from the backend enterprise system down to the Smart Objects. To fully support this, we present SOCRADES, an integration architecture that can serve the requirements of future manufacturing. SOCRADES provides generic components upon which sophisticated production processes can be modelled. In this paper we in particular give a list of requirements, the design, and the reference implementation of that integration architecture.", "title": "" }, { "docid": "10318d39b3ad18779accbf29b2f00fcd", "text": "Designing convolutional neural networks (CNN) models for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to design and improve mobile models on all three dimensions, it is challenging to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated neural architecture search approach for designing resourceconstrained mobile CNN models. We propose to explicitly incorporate latency information into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike in previous work, where mobile latency is considered via another, often inaccurate proxy (e.g., FLOPS), in our experiments, we directly measure real-world inference latency by executing the model on a particular platform, e.g., Pixel phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that permits layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our model achieves 74.0% top-1 accuracy with 76ms latency on a Pixel phone, which is 1.5× faster than MobileNetV2 (Sandler et al. 2018) and 2.4× faster than NASNet (Zoph et al. 2018) with the same top-1 accuracy. On the COCO object detection task, our model family achieves both higher mAP quality and lower latency than MobileNets.", "title": "" }, { "docid": "bee5aa9a7feedadf07597c9a58d95928", "text": "One of the key differences between the learning mechanism of humans and Artificial Neural Networks (ANNs) is the ability of humans to learn one task at a time. ANNs, on the other hand, can only learn multiple tasks simultaneously. Any attempts at learning new tasks incrementally cause them to completely forget about previous tasks. This lack of ability to learn incrementally, called Catastrophic Forgetting, is considered a major hurdle in building a true AI system. 
In this paper, our goal is to isolate the truly effective existing ideas for incremental learning from those that only work under certain conditions. To this end, we first thoroughly analyze the current state of the art (iCaRL) method for incremental learning and demonstrate that the good performance of the system is not because of the reasons presented in the existing literature. We conclude that the success of iCaRL is primarily due to knowledge distillation and recognize a key limitation of knowledge distillation, i.e., it often leads to bias in classifiers. Finally, we propose a dynamic threshold moving algorithm that is able to successfully remove this bias. We demonstrate the effectiveness of our algorithm on CIFAR100 and MNIST datasets showing near-optimal results. Our implementation is available at: https://github.com/Khurramjaved96/incremental-learning.", "title": "" }, { "docid": "acb0f1e123cb686b4aeab418f380bd79", "text": "Surface parameterization is necessary for many graphics tasks: texture-preserving simplification, remeshing, surface painting, and precomputation of solid textures. The stretch caused by a given parameterization determines the sampling rate on the surface. In this article, we present an automatic parameterization method for segmenting a surface into patches that are then flattened with little stretch.\n Many objects consist of regions of relatively simple shapes, each of which has a natural parameterization. Based on this observation, we describe a three-stage feature-based patch creation method for manifold surfaces. The first two stages, genus reduction and feature identification, are performed with the help of distance-based surface functions. In the last stage, we create one or two patches for each feature region based on a covariance matrix of the feature's surface points.\n To reduce stretch during patch unfolding, we notice that stretch is a 2 × 2 tensor, which in ideal situations is the identity. Therefore, we use the Green-Lagrange tensor to measure and to guide the optimization process. Furthermore, we allow the boundary vertices of a patch to be optimized by adding scaffold triangles. We demonstrate our feature-based patch creation and patch unfolding methods for several textured models.\n Finally, to evaluate the quality of a given parameterization, we describe an image-based error measure that takes into account stretch, seams, smoothness, packing efficiency, and surface visibility.", "title": "" } ]
scidocsrr
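The incremental-learning abstract above credits most of iCaRL's gains to knowledge distillation and proposes threshold moving to correct the classifier bias that distillation introduces. As a minimal sketch only, the snippet below shows a temperature-scaled distillation loss and a naive post-hoc rescaling of the old-class probabilities; the function names, the temperature, and the rescaling scheme are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; larger temperatures give softer targets.
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Hinton-style KL term between softened teacher and student distributions
    # (assumed form; the paper's exact loss may differ).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)
    return temperature ** 2 * kl.mean()

def bias_corrected_predict(logits, old_classes, scale=1.2):
    # Naive "threshold moving": rescale the probabilities of previously seen
    # classes before taking the argmax; 'scale' must be tuned on held-out data.
    probs = softmax(logits)
    probs[:, old_classes] *= scale
    return probs.argmax(axis=1)

# Toy usage: 4 samples, 6 classes, classes 0-3 learned in earlier tasks.
rng = np.random.default_rng(0)
student, teacher = rng.normal(size=(4, 6)), rng.normal(size=(4, 6))
print(distillation_loss(student, teacher))
print(bias_corrected_predict(student, old_classes=[0, 1, 2, 3]))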
8d6345ae1dbe14185089ee6bb06dc57f
Learning from Examples as an Inverse Problem
[ { "docid": "f51a854a390be7d6980b49aea2e955cf", "text": "The purpose of this paper is to provide a PAC error analysis for the q-norm soft margin classifier, a support vector machine classification algorithm. It consists of two parts: regularization error and sample error. While many techniques are available for treating the sample error, much less is known for the regularization error and the corresponding approximation error for reproducing kernel Hilbert spaces. We are mainly concerned about the regularization error. It is estimated for general distributions by a K-functional in weighted L spaces. For weakly separable distributions (i.e., the margin may be zero) satisfactory convergence rates are provided by means of separating functions. A projection operator is introduced, which leads to better sample error estimates especially for small complexity kernels. The misclassification error is bounded by the V -risk associated with a general class of loss functions V . The difficulty of bounding the offset is overcome. Polynomial kernels and Gaussian kernels are used to demonstrate the main results. The choice of the regularization parameter plays an important role in our analysis.", "title": "" } ]
[ { "docid": "19607c362f07ebe0238e5940fefdf03f", "text": "This paper presents an approach for generating photorealistic video sequences of dynamically varying facial expressions in human-agent interactions. To this end, we study human-human interactions to model the relationship and influence of one individual's facial expressions in the reaction of the other. We introduce a two level optimization of generative adversarial models, wherein the first stage generates a dynamically varying sequence of the agent's face sketch conditioned on facial expression features derived from the interacting human partner. This serves as an intermediate representation, which is used to condition a second stage generative model to synthesize high-quality video of the agent face. Our approach uses a novel L1 regularization term computed from layer features of the discriminator, which are integrated with the generator objective in the GAN model. Session constraints are also imposed on video frame generation to ensure appearance consistency between consecutive frames. We demonstrated that our model is effective at generating visually compelling facial expressions. Moreover, we quantitatively showed that agent facial expressions in the generated video clips reflect valid emotional reactions to behavior of the human partner.", "title": "" }, { "docid": "57a23f68303a3694e4e6ba66e36f7015", "text": "OBJECTIVE\nTwo studies using cross-sectional designs explored four possible mechanisms by which loneliness may have deleterious effects on health: health behaviors, cardiovascular activation, cortisol levels, and sleep.\n\n\nMETHODS\nIn Study 1, we assessed autonomic activity, salivary cortisol levels, sleep quality, and health behaviors in 89 undergraduate students selected based on pretests to be among the top or bottom quintile in feelings of loneliness. In Study 2, we assessed blood pressure, heart rate, salivary cortisol levels, sleep quality, and health behaviors in 25 older adults whose loneliness was assessed at the time of testing at their residence.\n\n\nRESULTS\nTotal peripheral resistance was higher in lonely than nonlonely participants, whereas cardiac contractility, heart rate, and cardiac output were higher in nonlonely than lonely participants. Lonely individuals also reported poorer sleep than nonlonely individuals. Study 2 indicated greater age-related increases in blood pressure and poorer sleep quality in lonely than nonlonely older adults. Mean salivary cortisol levels and health behaviors did not differ between groups in either study.\n\n\nCONCLUSIONS\nResults point to two potentially orthogonal predisease mechanisms that warrant special attention: cardiovascular activation and sleep dysfunction. Health behavior and cortisol regulation, however, may require more sensitive measures and large sample sizes to discern their roles in loneliness and health.", "title": "" }, { "docid": "e4892dfe4da663c4044a78a8892010a8", "text": "Turkey has been undertaking many projects to integrate Information and Communication Technology (ICT) sources into practice in the teaching-learning process in educational institutions. This research study sheds light on the use of ICT tools in primary schools in the social studies subject area, by considering various variables which affect the success of the implementation of the use of these tools. A survey was completed by 326 teachers who teach fourth and fifth grade at primary level. 
The results showed that although teachers are willing to use ICT resources and are aware of the existing potential, they are facing problems in relation to accessibility to ICT resources and lack of in-service training opportunities.", "title": "" }, { "docid": "0f2caa9b91c2c180cbfbfcc25941f78e", "text": "BACKGROUND\nSevere mitral annular calcification causing degenerative mitral stenosis (DMS) is increasingly encountered in patients undergoing mitral and aortic valve interventions. However, its clinical profile and natural history and the factors affecting survival remain poorly characterized. The goal of this study was to characterize the factors affecting survival in patients with DMS.\n\n\nMETHODS\nAn institutional echocardiographic database was searched for patients with DMS, defined as severe mitral annular calcification without commissural fusion and a mean transmitral diastolic gradient of ≥2 mm Hg. This resulted in a cohort of 1,004 patients. Survival was analyzed as a function of clinical, pharmacologic, and echocardiographic variables.\n\n\nRESULTS\nThe patient characteristics were as follows: mean age, 73 ± 14 years; 73% women; coronary artery disease in 49%; and diabetes mellitus in 50%. The 1- and 5-year survival rates were 78% and 47%, respectively, and were slightly worse with higher DMS grades (P = .02). Risk factors for higher mortality included greater age (P < .0001), atrial fibrillation (P = .0009), renal insufficiency (P = .004), mitral regurgitation (P < .0001), tricuspid regurgitation (P < .0001), elevated right atrial pressure (P < .0001), concomitant aortic stenosis (P = .02), and low serum albumin level (P < .0001). Adjusted for propensity scores, use of renin-angiotensin system blockers (P = .02) or statins (P = .04) was associated with better survival, and use of digoxin was associated with higher mortality (P = .007).\n\n\nCONCLUSIONS\nPrognosis in patients with DMS is poor, being worse in the aged and those with renal insufficiency, atrial fibrillation, and other concomitant valvular lesions. Renin-angiotensin system blockers and statins may confer a survival benefit, and digoxin use may be associated with higher mortality in these patients.", "title": "" }, { "docid": "073ea28d4922c2d9c1ef7945ce4aa9e2", "text": "The three major solutions for increasing the nominal performance of a CPU are: multiplying the number of cores per socket, expanding the embedded cache memories and use multi-threading to reduce the impact of the deep memory hierarchy. Systems with tens or hundreds of hardware threads, all sharing a cache coherent UMA or NUMA memory space, are today the de-facto standard. While these solutions can easily provide benefits in a multi-program environment, they require recoding of applications to leverage the available parallelism. Threads must synchronize and exchange data, and the overall performance is heavily in influenced by the overhead added by these mechanisms, especially as developers try to exploit finer grain parallelism to be able to use all available resources.", "title": "" }, { "docid": "3913e29aab9b4447edfd4f34a16c38ed", "text": "This review compares the biological and physiological function of Sigma receptors [σRs] and their potential therapeutic roles. Sigma receptors are widespread in the central nervous system and across multiple peripheral tissues. σRs consist of sigma receptor one (σ1R) and sigma receptor two (σ2R) and are expressed in numerous regions of the brain. 
The sigma receptor was originally proposed as a subtype of opioid receptors and was suggested to contribute to the delusions and psychoses induced by benzomorphans such as SKF-10047 and pentazocine. Later studies confirmed that σRs are non-opioid receptors (not an µ opioid receptor) and play a more diverse role in intracellular signaling, apoptosis and metabolic regulation. σ1Rs are intracellular receptors acting as chaperone proteins that modulate Ca2+ signaling through the IP3 receptor. They dynamically translocate inside cells, hence are transmembrane proteins. The σ1R receptor, at the mitochondrial-associated endoplasmic reticulum membrane, is responsible for mitochondrial metabolic regulation and promotes mitochondrial energy depletion and apoptosis. Studies have demonstrated that they play a role as a modulator of ion channels (K+ channels; N-methyl-d-aspartate receptors [NMDAR]; inositol 1,3,5 triphosphate receptors) and regulate lipid transport and metabolism, neuritogenesis, cellular differentiation and myelination in the brain. σ1R modulation of Ca2+ release, modulation of cardiac myocyte contractility and may have links to G-proteins. It has been proposed that σ1Rs are intracellular signal transduction amplifiers. This review of the literature examines the mechanism of action of the σRs, their interaction with neurotransmitters, pharmacology, location and adverse effects mediated through them.", "title": "" }, { "docid": "a33f862d0b7dfde7b9f18aa193db9acf", "text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor  awais.shakoor22@gmail.com Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). 
Phytoremediation is the use of green plants to remove contaminants from the environment or to render them harmless. The technology uses plants to take up heavy metals from the soil through their roots; such plants act like vacuum cleaners and must be able to withstand and survive high levels of heavy metals in the soil, which makes them unique (Baker, 2000). Population growth and increasing industrialization have caused water and soil contamination that is harmful to the environment as well as to human health. Worldwide, contamination of soil by heavy metals has become a very serious issue, so removal of these heavy metals from the soil is necessary to protect soil and human health. Both inorganic and organic contaminants, such as petroleum, heavy metals, agricultural waste, pesticides and fertilizers, are the main sources that deteriorate soil health (Chirakkara et al., 2016). Heavy metals play different roles in biological systems and can be divided into two groups, essential and non-essential. Heavy metals that play a vital role in the biochemical and physiological functions of some living organisms are called essential heavy metals, such as zinc (Zn), nickel (Ni) and copper (Cu) (Cempel and Nikel, 2006). Heavy metals that play no role in biochemical or physiological functions are called non-essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As) and cadmium (Cd) (Dabonne et al., 2010). Cadmium (Cd) is considered a non-essential heavy metal that is more toxic at very low concentrations than the other non-essential heavy metals. It is toxic to plant, human and animal health, and it causes serious diseases in humans through the food chain (Rafiq et al., 2014). Removal of Cd from the soil is therefore an important problem (Neilson and Rajakaruna, 2015). Several methods are used to remove Cd from the soil, including physical, chemical and physicochemical methods that increase the soil pH (Liu et al., 2015). The main sources of Cd contamination in the soil and environment are automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique for removing heavy metals from the soil (Ma et al., 2011). Plants take up the heavy metals through their roots and change the soil properties in ways that help increase soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil, and they also help prevent wind, rain and groundwater from carrying pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant, chemicals can be stored in the roots, stems, or leaves, changed into less harmful chemicals within the plant, or changed into gases that are released into the air (US Environmental Protection Agency, 2001). Phytoremediation is the direct use of living green plants to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater; sites with low concentrations of pollutants over large areas and at shallow depths offer the most favorable circumstances for this treatment (US Environmental Protection Agency, 2011).
Phytoremediation is the use of plants for the treatment of contaminated soil sites, sediments and water. It is best applied at sites with shallow contamination by persistent organic, nutrient, or metal pollutants. Phytoremediation is an emerging technology for contaminated sites that is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils are treated on site using plants, in particular plants capable of excessive accumulation of metals from contaminated soils during growth (National Research Council, 1997). Phytoremediation addresses the concentration of pollutants in contaminated soil, water or air by means of plants that are able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation comprises several techniques, and the choice among them depends on different factors such as soil type, contaminant type, soil depth and ground water level, as well as the particular operating conditions and technology applied at the contaminated site (Hyman and Dupont, 2001). Techniques of phytoremediation: the different techniques involved in phytoremediation include phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. Phytoextraction, also called phytoabsorption or phytoaccumulation, is the technique in which heavy metals are removed by uptake through the roots from the water and soil environment and accumulated in the shoots (Rafati et al., 2011). Phytostabilisation, also known as phytoimmobilization, is the technique in which different types of plants are used to stabilize contaminants in the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the contaminants are reduced, which helps to avoid their movement into the food chain as well as into ground water (Erakhrumen, 2007). Nevertheless, phytostabilisation only stops the movement of heavy metals and is not a permanent solution for removing contamination from the soil; it is essentially a management approach for inactivating potentially toxic heavy metals in the soil environment (Vangronsveld et al., 2009).", "title": "" }, { "docid": "e8ff6978cae740152a918284ebe49fe3", "text": "Cross-lingual sentiment classification aims to predict the sentiment orientation of a text in a language (named as the target language) with the help of the resources from another language (named as the source language). However, current cross-lingual performance is normally far away from satisfaction due to the huge difference in linguistic expression and social culture. In this paper, we suggest to perform active learning for cross-lingual sentiment classification, where only a small scale of samples are actively selected and manually annotated to achieve reasonable performance in a short time for the target language. The challenge therein is that there are normally much more labeled samples in the source language than those in the target language. This makes the small amount of labeled samples from the target language flooded in the abundance of labeled samples from the source language, which largely reduces their impact on cross-lingual sentiment classification. 
To address this issue, we propose a data quality controlling approach in the source language to select high-quality samples from the source language. Specifically, we propose two kinds of data quality measurements, intraand extra-quality measurements, from the certainty and similarity perspectives. Empirical studies verify the appropriateness of our active learning approach to cross-lingual sentiment classification.", "title": "" }, { "docid": "01be341cfcfe218896c795d769c66e69", "text": "This letter proposes a multi-user uplink channel estimation scheme for mmWave massive MIMO over frequency selective fading (FSF) channels. Specifically, by exploiting the angle-domain structured sparsity of mmWave FSF channels, a distributed compressive sensing-based channel estimation scheme is proposed. Moreover, by using the grid matching pursuit strategy with adaptive measurement matrix, the proposed algorithm can solve the power leakage problem caused by the continuous angles of arrival or departure. Simulation results verify the good performance of the proposed solution.", "title": "" }, { "docid": "045162dbad88cd4d341eed216779bb9b", "text": "BACKGROUND\nCrocodile oil and its products are used as ointments for burns and scalds in traditional medicines. A new ointment formulation - crocodile oil burn ointment (COBO) was developed to provide more efficient wound healing activity. The purpose of the study was to evaluate the burn healing efficacy of this new formulation by employing deep second-degree burns in a Wistar rat model. The analgesic and anti-inflammatory activities of COBO were also studied to provide some evidences for its further use.\n\n\nMATERIALS AND METHODS\nThe wound healing potential of this formulation was evaluated by employing a deep second-degree burn rat model and the efficiency was comparatively assessed against a reference ointment - (1% wt/wt) silver sulfadiazine (SSD). After 28 days, the animals were euthanized and the wounds were removed for transversal and longitudinal histological studies. Acetic acid-induced writhing in mice was used to evaluate the analgesic activity and its anti-inflammatory activity was observed in xylene -induced edema in mice.\n\n\nRESULTS\nCOBO enhanced the burn wound healing (20.5±1.3 d) as indicated by significant decrease in wound closure time compared with the burn control (25.0±2.16 d) (P<0.01). Hair follicles played an importance role in the physiological functions of the skin, and their growth in the wound could be revealed for the skin regeneration situation. Histological results showed that the hair follicles were well-distributed in the post-burn skin of COBO treatment group, and the amounts of total, active, primary and secondary hair follicles in post-burn 28-day skin of COBO treatment groups were more than those in burn control and SSD groups. On the other hand, the analgesic and anti-inflammatory activity of COBO were much better than those of control group, while they were very close to those of moist exposed burn ointment (MEBO).\n\n\nCONCLUSIONS\nCOBO accelerated wound closure, reduced inflammation, and had analgesic effects compared with SSD in deep second degree rat burn model. These findings suggest that COBO would be a potential therapy for treating human burns. 
Abbreviations: COBO, crocodile oil burn ointment; SSD, silver sulfadiazine; MEBO, moist exposed burn ointment; TCM, traditional Chinese medicine; CHM, Chinese herbal medicine; GC-MS, gas chromatography-mass spectrometry.", "title": "" }, { "docid": "162bfca981e89b1b3174a030ad8f64c6", "text": "This paper addresses the consensus problem of multiagent systems with a time-invariant communication topology consisting of general linear node dynamics. A distributed observer-type consensus protocol based on relative output measurements is proposed. A new framework is introduced to address in a unified way the consensus of multiagent systems and the synchronization of complex networks. Under this framework, the consensus of multiagent systems with a communication topology having a spanning tree can be cast into the stability of a set of matrices of the same low dimension. The notion of consensus region is then introduced and analyzed. It is shown that there exists an observer-type protocol solving the consensus problem and meanwhile yielding an unbounded consensus region if and only if each agent is both stabilizable and detectable. A multistep consensus protocol design procedure is further presented. The consensus with respect to a time-varying state and the robustness of the consensus protocol to external disturbances are finally discussed. The effectiveness of the theoretical results is demonstrated through numerical simulations, with an application to low-Earth-orbit satellite formation flying.", "title": "" }, { "docid": "0f5ad4bd916a0115215adc938d46bf2c", "text": "We propose a new paradigm to effortlessly get a portable geometric Level Of Details (LOD) for a point cloud inside a Point Cloud Server. The point cloud is divided into groups of points (patch), then each patch is reordered (MidOc ordering) so that reading points following this order provides more and more details on the patch. This LOD have then multiple applications: point cloud size reduction for visualisation (point cloud streaming) or speeding of slow algorithm, fast density peak detection and correction as well as safeguard for methods that may be sensible to density variations. The LOD method also embeds information about the sensed object geometric nature, and thus can be used as a crude multi-scale dimensionality descriptor, enabling fast classification and on-the-fly filtering for basic classes.", "title": "" }, { "docid": "dedef832d8b54cac137277afe9cd27eb", "text": "The number of strands to minimize loss in a litz-wire transformer winding is determined. With fine stranding, the ac resistance factor decreases, but dc resistance increases because insulation occupies more of the window area. A power law to model insulation thickness is combined with standard analysis of proximity-effect losses.", "title": "" }, { "docid": "228cd0696e0da6f18a22aa72f009f520", "text": "Modern Convolutional Neural Networks (CNN) are extremely powerful on a range of computer vision tasks. However, their performance may degrade when the data is characterised by large intra-class variability caused by spatial transformations. The Spatial Transformer Network (STN) is currently the method of choice for providing CNNs the ability to remove those transformations and improve performance in an end-to-end learning framework. In this paper, we propose Densely Fused Spatial Transformer Network (DeSTNet), which, to our best knowledge, is the first dense fusion pattern for combining multiple STNs. 
Specifically, we show how changing the connectivity pattern of multiple STNs from sequential to dense leads to more powerful alignment modules. Extensive experiments on three benchmarks namely, MNIST, GTSRB, and IDocDB show that the proposed technique outperforms related state-of-the-art methods (i.e., STNs and CSTNs) both in terms of accuracy and robustness.", "title": "" }, { "docid": "f3090b5de9f3f1c29f261a2ef86bac61", "text": "The K-means algorithm is a popular data-clustering algorithm. However, one of its drawbacks is the requirement for the number of clusters, K, to be specified before the algorithm is applied. This paper first reviews existing methods for selecting the number of clusters for the algorithm. Factors that affect this selection are then discussed and a new measure to assist the selection is proposed. The paper concludes with an analysis of the results of using the proposed measure to determine the number of clusters for the K-means algorithm for different data sets.", "title": "" }, { "docid": "e870f2fe9a26b241bdeca882b6186169", "text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.", "title": "" }, { "docid": "a6f1480f52d142a013bb88a92e47b0d7", "text": "An isolated switched high step up boost DC-DC converter is discussed in this paper. The main objective of this paper is to step up low voltage to very high voltage. This paper mainly initiates at boosting a 30V DC into 240V DC. The discussed converter benefits from the continuous input current. Usually, step-up DC-DC converters are suitable for input whose voltage level is very low. The circuital design comprises of four main stages. Firstly, an impedance network which is used to boost the low input voltage. Secondly a switching network which is used to boost the input voltage then an isolation transformer which is used to provide higher boosting ability and finally a voltage multiplier rectifier which is used to rectify the secondary voltage of the transformer. No switching deadtime is required, which increases the reliability of the converter. Comparing with the existing step-up topologies indicates that this new design is hybrid, portable, higher power density and the size of the whole system is also reduced. The principles as well as operations were analysed and experimentally worked out, which provides a higher efficiency. KeywordImpedance Network, Switching Network, Isolation Transformer, Voltage Multiplier Rectifier, MicroController, DC-DC Boost Converter __________________________________________________________________________________________________", "title": "" }, { "docid": "1c7251c55cf0daea9891c8a522bbd3ec", "text": "The role of computers in the modern office has divided ouractivities between virtual interactions in the realm of thecomputer and physical interactions with real objects within thetraditional office infrastructure. This paper extends previous workthat has attempted to bridge this gap, to connect physical objectswith virtual representations or computational functionality, viavarious types of tags. 
We discuss a variety of scenarios we haveimplemented using a novel combination of inexpensive, unobtrusiveand easy to use RFID tags, tag readers, portable computers andwireless networking. This novel combination demonstrates theutility of invisibly, seamlessly and portably linking physicalobjects to networked electronic services and actions that arenaturally associated with their form.", "title": "" }, { "docid": "1ad6efaaf4e3201d59c62cd3dbcc01a6", "text": "•Combine Bayesian change point detection with Gaussian Processes to define a nonstationary time series model. •Central aim is to react to underlying regime changes in an online manner. •Able to integrate out all latent variables and optimize hyperparameters sequentially. •Explore three alternative ways of augmenting GP models to handle nonstationarity (GPTS, ARGPCP and NSGP – see below). •A Bayesian approach (BOCPD) for online change point detection was introduced in [1]. •BOCPD introduces a latent variable representing the run length at time t and adapts predictions via integrating out the run length. •BOCPD has two key ingredients: –Any model which can construct a predictive density for future observations, in particular, p(xt|x(t−τ ):(t−1), θm), i.e., the “underlying predictive model” (UPM). –A hazard function H(r|θh) which encodes our prior belief in a change point occuring after observing a run length r.", "title": "" }, { "docid": "cc05dca89bf1e3f53cf7995e547ac238", "text": "Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic image labelling. Our paper has several contributions: We show how random forests can be augmented with structured label information and be used to deliver structured low-level predictions. The learning task is carried out by employing a novel split function evaluation criterion that exploits the joint distribution observed in the structured label space. This allows the forest to learn typical label transitions between object classes and avoid locally implausible label configurations. We provide two approaches for integrating the structured output predictions obtained at a local level from the forest into a concise, global, semantic labelling. We integrate our new ideas also in the Hough-forest framework with the view of exploiting contextual information at the classification level to improve the performance on the task of object detection. Finally, we provide experimental evidence for the effectiveness of our approach on different tasks: Semantic image labelling on the challenging MSRCv2 and CamVid databases, reconstruction of occluded handwritten Chinese characters on the Kaist database and pedestrian detection on the TU Darmstadt databases.", "title": "" } ]
scidocsrr
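The Gaussian-process change-point abstract listed above builds on Bayesian online change-point detection (BOCPD), in which a run-length posterior is updated recursively from a hazard function and an underlying predictive model (UPM). The sketch below implements only that run-length recursion with a constant hazard and a conjugate Gaussian UPM with known observation variance; it is not the paper's GP-based model, and the parameter values are made up for illustration.

import numpy as np
from scipy.stats import norm

def bocpd(data, hazard=1/100, mu0=0.0, var0=1.0, obs_var=1.0):
    # Run-length posterior over time for a Gaussian UPM with known observation
    # variance and a conjugate Normal prior on the mean (assumed for simplicity).
    T = len(data)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    mu, var = np.array([mu0]), np.array([var0])
    for t, x in enumerate(data):
        # Predictive probability of x under each current run-length hypothesis.
        pred = norm.pdf(x, loc=mu, scale=np.sqrt(var + obs_var))
        growth = R[t, : t + 1] * pred * (1 - hazard)   # run continues
        cp = np.sum(R[t, : t + 1] * pred * hazard)     # change point occurs now
        R[t + 1, 1 : t + 2] = growth
        R[t + 1, 0] = cp
        R[t + 1] /= R[t + 1].sum()                     # normalise the posterior
        # Conjugate update of the Normal posterior on the mean for each run length,
        # with the prior re-inserted for the new run length of zero.
        new_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        new_mu = new_var * (mu / var + x / obs_var)
        mu = np.concatenate(([mu0], new_mu))
        var = np.concatenate(([var0], new_var))
    return R

# Toy usage: a mean shift halfway through the series.
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
R = bocpd(series)
print(R[-1].argmax())  # most probable current run length after the last observation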
76f0513df0e14762b4da085193cc7d1f
Enterprise Architecture as Enabler of Organizational Agility - A Municipality Case Study
[ { "docid": "bb5cca7f3d3a7ddcfb6455f3e2cc94a6", "text": "Many organizations have adopted an Enterprise Architecture (EA) approach because of the potential benefits resulting from a more standardized and coordinated approach to systems development and management, and because of the tighter alignment of business and information technology in support of business strategy execution. At the same time, experience shows that having an effective EA practice is easier said than done and the coordination and implementation efforts can be daunting. While nobody disputes the potential benefits of well architected systems, there is no empirical evidence showing whether the organizational benefits of EA outweigh the coordination and management costs associated with the architecting process. Furthermore, most practitioners we have interviewed can provide technical metrics for internal EA efficiency and effectiveness, but none of our participants were able to provide concrete metrics or evidence about the bottom line impact that EA has on the organization as a whole. In this article we raise key issues associated with the evaluation of the organizational impact of EA and propose a framework for empirical research in this area.", "title": "" }, { "docid": "fad4ff82e9b11f28a70749d04dfbf8ca", "text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. Enterprise architecture (EA) is the definition and representation of a high-level view of an enterprise's business processes and IT systems, their interrelationships, and the extent to which these processes and systems are shared by different parts of the enterprise. EA aims to define a suitable operating platform to support an organisation's future goals and the roadmap for moving towards this vision. Despite significant practitioner interest in the domain, understanding the value of EA remains a challenge. Although many studies make EA benefit claims, the explanations of why and how EA leads to these benefits are fragmented, incomplete, and not grounded in theory. This article aims to address this knowledge gap by focusing on the question: How does EA lead to organisational benefits? Through a careful review of EA literature, the paper consolidates the fragmented knowledge on EA benefits and presents the EA Benefits Model (EABM). The EABM proposes that EA leads to organisational benefits through its impact on four benefit enablers: Organisational Alignment, Information Availability, Resource Portfolio Optimisation, and Resource Complementarity. The article concludes with a discussion of a number of potential avenues for future research, which could build on the findings of this study.", "title": "" } ]
[ { "docid": "70d4545496bfd3b68e092d0ce11be299", "text": "This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.", "title": "" }, { "docid": "3cc9f615445f3692aa258300d73f57ff", "text": "In good old-fashioned artificial intelligence (GOFAI), humans specified systems that solved problems. Much of the recent progress in AI has come from replacing human insights by learning. However, learning itself is still usually built by humans – specifically the choice that parameter updates should follow the gradient of a cost function. Yet, in analogy with GOFAI, there is no reason to believe that humans are particularly good at defining such learning systems: we may expect learning itself to be better if we learn it. Recent research in machine learning has started to realize the benefits of that strategy. We should thus expect this to be relevant for neuroscience: how could the correct learning rules be acquired? Indeed, behavioral science has long shown that humans learn-to-learn, which is potentially responsible for their impressive learning abilities. Here we discuss ideas across machine learning, neuroscience, and behavioral science that matter for the principle of learning-to-learn.", "title": "" }, { "docid": "4d52c27f623fdf083d2a5bddb4dfaade", "text": "The Iron Man media franchise glorifies futuristic interfaces and devices like holographic screens, powerful mobile devices, and heads-up displays. Consequently, a mainstream audience has come to know about and discursively relate to Augmented Reality (AR) technology through fan participation. This paper identifies how Iron Man fans reveal the belief that technology sensationalized in the films and comics may actually become real. Using humanities theories and methods, it argues for a new way to explore potential users' expectations for augmented reality. HCI as a field needs to broaden its focus and attend to fans in terms of their future as consumers and users.", "title": "" }, { "docid": "46ad960f5fe60635c6d556105b5e3607", "text": "The authors explored the utility of the Difficulties in Emotion Regulation Scale (DERS) in assessing adolescents' emotion regulation. Adolescents (11-17 years; N = 870) completed the DERS and measures of externalizing and internalizing problems. 
Confirmatory factor analysis suggested a similar factor structure in the adolescent sample of the authors as demonstrated previously among adults. Furthermore, results indicated no gender bias in ratings of DERS factors on three scales (as evidenced by strong factorial gender invariance) and limited gender bias on the other three scales (as evidenced by metric invariance). Female adolescents scored higher on four of six DERS factors than male adolescents. DERS factors were meaningfully related to adolescents' externalizing and internalizing problems. Results suggest that scores on the DERS show promising internal consistency and validity in a community sample of adolescents.", "title": "" }, { "docid": "ca985aa9f64536c339a365b5218ce61f", "text": "Dependency network measures capture various facets of the dependencies among software modules. For example, betweenness centrality measures how much information flows through a module compared to the rest of the network. Prior studies have shown that these measures are good predictors of post-release failures. However, these studies did not explore the causes for such good performance and did not provide guidance for practitioners to avoid future bugs. In this paper, we closely examine the causes for such performance by replicating prior studies using data from the Eclipse project. Our study shows that a small subset of dependency network measures have a large impact on post-release failure, while other network measures have a very limited impact. We also analyze the benefit of bug prediction in reducing testing cost. Finally, we explore the practical implications of the important network measures.", "title": "" }, { "docid": "8a7bd0858a51380ed002b43b08a1c9f1", "text": "Unbiased language is a requirement for reference sources like encyclopedias and scientific texts. Bias is, nonetheless, ubiquitous, making it crucial to understand its nature and linguistic realization and hence detect bias automatically. To this end we analyze real instances of human edits designed to remove bias from Wikipedia articles. The analysis uncovers two classes of bias: framing bias, such as praising or perspective-specific words, which we link to the literature on subjectivity; and epistemological bias, related to whether propositions that are presupposed or entailed in the text are uncontroversially accepted as true. We identify common linguistic cues for these classes, including factive verbs, implicatives, hedges, and subjective intensifiers. These insights help us develop features for a model to solve a new prediction task of practical importance: given a biased sentence, identify the bias-inducing word. Our linguistically-informed model performs almost as well as humans tested on the same task.", "title": "" }, { "docid": "45d57f01218522609d6ef93de61ea491", "text": "We consider the problem of finding a ranking of a set of elements that is “closest to” a given set of input rankings of the elements; more precisely, we want to find a permutation that minimizes the Kendall-tau distance to the input rankings, where the Kendall-tau distance is defined as the sum over all input rankings of the number of pairs of elements that are in a different order in the input ranking than in the output ranking. If the input rankings are permutations, this problem is known as the Kemeny rank aggregation problem. 
This problem arises for example in building meta-search engines for Web search, aggregating viewers’ rankings of movies, or giving recommendations to a user based on several different criteria, where we can think of having one ranking of the alternatives for each criterion. Many of the approximation algorithms and heuristics that have been proposed in the literature are either positional, comparison sort or local search algorithms. The rank aggregation problem is a special case of the (weighted) feedback arc set problem, but in the feedback arc set problem we use only information about the preferred relative ordering of pairs of elements to find a ranking of the elements, whereas in the case of the rank aggregation problem, we have additional information in the form of the complete input rankings. The positional methods are the only algorithms that use this additional information. Since the rank aggregation problem is NP-hard, none of these algorithms is guaranteed to find the optimal solution, and different algorithms will provide different solutions. We give theoretical and practical evidence that a combination of these different approaches gives algorithms that are superior to the individual algorithms. Theoretically, we give lower bounds on the performance for many of the “pure” methods. Practically, we perform an extensive evaluation of the “pure” algorithms and combinations of different approaches. We give three recommendations for which (combination of) methods to use based on whether a user wants to have a very fast, fast or reasonably fast algorithm.", "title": "" }, { "docid": "fd62cb306e6e39e7ead79696591746b2", "text": "Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods adopted term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern (or phrase)-based approaches should perform better than the term-based ones, but many experiments do not support this hypothesis. This paper presents an innovative and effective pattern discovery technique which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.", "title": "" }, { "docid": "b845aaa999c1ed9d99cb9e75dff11429", "text": "We present a new space-efficient approach, (SparseDTW ), to compute the Dynamic Time Warping (DTW ) distance between two time series that always yields the optimal result. This is in contrast to other known approaches which typically sacrifice optimality to attain space efficiency. 
The main idea behind our approach is to dynamically exploit the existence of similarity and/or correlation between the time series. The more the similarity between the time series the less space required to compute the DTW between them. To the best of our knowledge, all other techniques to speedup DTW, impose apriori constraints and do not exploit similarity characteristics that may be present in the data. We conduct experiments and demonstrate that SparseDTW outperforms previous approaches.", "title": "" }, { "docid": "3181171d92ce0a8d3a44dba980c0cc5f", "text": "Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.", "title": "" }, { "docid": "cd176e795fe52784e27a1c001979709b", "text": "[Purpose] The purpose of this study was to identify the influence of relaxation exercises for the masticator muscles on the limited ROM and pain of temporomandibular joint dysfunction (TMD). [Subjects and Methods] The subjects were 10 men and 31 women in their 20s and 30s. They were randomly divided into no treatment, active exercises and relaxation exercise for the masticator muscle groups. The exercise groups performed exercises three times or more a day over a period of four weeks, performing exercise for 10 minutes each time. Before and after the four weeks, all the subjects were measured for ROM, deviation, occlusion, and pain in the temporomandibular joint. [Results] ROM, deviation and pain showed statistically significant in improvements after the intervention in the active exercise and relaxation exercise for the masticator muscle groups. Deviation also showed a statistically significant difference between the active exercise and relaxation exercise groups. [Conclusion] The results verify that as with active exercises, relaxation exercises for the masticatory muscles are an effective treatment for ROM and pain in TMD. Particularly, masticatory muscle relaxation exercises were found to be a treatment that is also effective for deviation.", "title": "" }, { "docid": "08765f109452855227eb85395e4c49b1", "text": "and on their differing feelings toward the politicians (in this case, across liking, trusting, and feeling affiliated with the candidates). 
After 16 test runs, the voters did indeed change their attitudes and feelings toward the candidates in different and yet generally realistic ways, and even changed their attitudes about other issues based on what a candidate extolled.", "title": "" }, { "docid": "1331dc5705d4b416054341519126f32f", "text": "There is a large tradition of work in moral psychology that explores the capacity for moral judgment by focusing on the basic capacity to distinguish moral violations (e.g. hitting another person) from conventional violations (e.g. playing with your food). However, only recently have there been attempts to characterize the cognitive mechanisms underlying moral judgment (e.g. Cognition 57 (1995) 1; Ethics 103 (1993) 337). Recent evidence indicates that affect plays a crucial role in mediating the capacity to draw the moral/conventional distinction. However, the prevailing account of the role of affect in moral judgment is problematic. This paper argues that the capacity to draw the moral/conventional distinction depends on both a body of information about which actions are prohibited (a Normative Theory) and an affective mechanism. This account leads to the prediction that other normative prohibitions that are connected to an affective mechanism might be treated as non-conventional. An experiment is presented that indicates that \"disgust\" violations (e.g. spitting at the table), are distinguished from conventional violations along the same dimensions as moral violations.", "title": "" }, { "docid": "fb84f9d8a88c3afd5e3eb2f290989b72", "text": "With higher reliability requirements in clusters and data centers, RAID-6 has gained popularity due to its capability to tolerate concurrent failures of any two disks, which has been shown to be of increasing importance in large scale storage systems. Among various implementations of erasure codes in RAID-6, a typical set of codes known as Maximum Distance Separable (MDS) codes aim to offer data protection against disk failures with optimal storage efficiency. However, because of the limitation of horizontal parity or diagonal/anti-diagonal parities used in MDS codes, storage systems based on RAID-6 suffers from unbalanced I/O and thus low performance and reliability. To address this issue, in this paper, we propose a new parity called Horizontal-Diagonal Parity (HDP), which takes advantages of both horizontal and diagonal/anti-diagonal parities. The corresponding MDS code, called HDP code, distributes parity elements uniformly in each disk to balance the I/O workloads. HDP also achieves high reliability via speeding up the recovery under single or double disk failure. Our analysis shows that HDP provides better balanced I/O and higher reliability compared to other popular MDS codes.", "title": "" }, { "docid": "a11f1155f3a9805f7c17284c99eed109", "text": "This paper presents the architecture and design of a high-performance asynchronous Huffman decoder for compressed-code embedded processors. In such processors, embedded programs are stored in compressed form in instruction ROM, then are decompressed on demand during instruction cache refill. The Huffman decoder is used as a code decompression engine. The circuit is non-pipelined, and is implemented as an iterative self-timed ring. It achieves a high-speed decode rate with very low area overhead. Simulations using Lsim show an average throughput of 32 bits/25 ns on the output side (or 163 MBytes/sec, or 1303 Mbit/sec), corresponding to about 889 Mbit/sec on the input side. 
The area of the design is extremely small: under 1 mm in a 0.8 micron fullcustom layout. The decoder is estimated to have higher throughput than any comparable synchronous Huffman decoder (after normalizing for feature size and voltage), yet is much smaller than synchronous designs. Its performance is also 83% faster than a recently published asynchronous Huffman decoder using the same technology.", "title": "" }, { "docid": "90c2121fc04c0c8d9c4e3d8ee7b8ecc0", "text": "Measuring similarity between two data objects is a more challenging problem for data mining and knowledge discovery tasks. The traditional clustering algorithms have been mainly stressed on numerical data, the implicit property of which can be exploited to define distance function between the data points to define similarity measure. The problem of similarity becomes more complex when the data is categorical which do not have a natural ordering of values or can be called as non geometrical attributes. Clustering on relational data sets when majority of its attributes are of categorical types makes interesting facts. No earlier work has been done on clustering categorical attributes of relational data set types making use of the property of functional dependency as parameter to measure similarity. This paper is an extension of earlier work on clustering relational data sets where domains are unique and similarity is context based and introduces a new notion of similarity based on dependency of an attribute on other attributes prevalent in the relational data set. This paper also gives a brief overview of popular similarity measures of categorical attributes. This novel similarity measure can be used to apply on tuples and their respective values. The important property of categorical domain is that they have smaller number of attribute values. The similarity measure of relational data sets then can be applied to the smaller data sets for efficient results.", "title": "" }, { "docid": "00357ea4ef85efe5cd2080e064ddcd06", "text": "The cumulative match curve (CMC) is used as a measure of 1: m identification system performance. It judges the ranking capabilities of an identification system. The receiver operating characteristic curve (ROC curve) of a verification system, on the other hand, expresses the quality of a 1:1 matcher. The ROC plots the false accept rate (FAR) of a 1:1 matcher versus the false reject rate (FRR) of the matcher. We show that the CMC is also related to the FAR and FRR of a 1:1 matcher, i.e., the matcher that is used to rank the candidates by sorting the scores. This has as a consequence that when a 1:1 matcher is used for identification, that is, for sorting match scores from high to low, the CMC does not offer any additional information beyond the FAR and FRR curves. The CMC is just another way of displaying the data and can be computed from the FAR and FRR.", "title": "" }, { "docid": "212f128450a141b5b4c83c8c57d14677", "text": "Local Authority road networks commonly include roads with different functional characteristics and a variety of construction types, which require maintenance solutions tailored to their needs. Given this background, on local road network, pavement management is founded on the experience of the agency engineers and is often constrained by low budgets and a variety of environmental and external requirements. 
This paper forms part of a research work that investigates the use of digital techniques for obtaining field data in order to increase safety and reduce labour cost requirements using a semi-automated distress collection and measurement system. More specifically, a definition of a distress detection procedure is presented which aims at producing a result complying more closely to the distress identification manuals and protocols. The process comprises the following two steps: Automated pavement image collection. Images are collected using the high speed digital acquisition system of the Mobile Laboratory designed and implemented by the Department of Civil and Environmental Engineering of the University of Catania; Distress Detection. By way of the Pavement Distress Analyser (PDA), a specialised software, images are adjusted to eliminate their optical distortion. Cracks, potholes and patching are automatically detected and subsequently classified by means of an operator assisted approach. An intense, experimental field survey has made it possible to establish that the procedure obtains more consistent distress measurements than a manual survey thus increasing its repeatability, reducing costs and increasing safety during the survey. Moreover, the pilot study made it possible to validate results coming from a survey carried out under normal traffic conditions, concluding that it is feasible to integrate the procedure into a roadway pavement management system.", "title": "" }, { "docid": "9d30cfbc7d254882e92cad01f5bd17c7", "text": "Data from culture studies have revealed that Enterococcus faecalis is occasionally isolated from primary endodontic infections but frequently recovered from treatment failures. This molecular study was undertaken to investigate the prevalence of E. faecalis in endodontic infections and to determine whether this species is associated with particular forms of periradicular diseases. Samples were taken from cases of untreated teeth with asymptomatic chronic periradicular lesions, acute apical periodontitis, or acute periradicular abscesses, and from root-filled teeth associated with asymptomatic chronic periradicular lesions. DNA was extracted from the samples, and a 16S rDNA-based nested polymerase chain reaction assay was used to identify E. faecalis. This species occurred in seven of 21 root canals associated with asymptomatic chronic periradicular lesions, in one of 10 root canals associated with acute apical periodontitis, and in one of 19 pus samples aspirated from acute periradicular abscesses. Statistical analysis showed that E. faecalis was significantly more associated with asymptomatic cases than with symptomatic ones. E. faecalis was detected in 20 of 30 cases of persistent endodontic infections associated with root-filled teeth. When comparing the frequencies of this species in 30 cases of persistent infections with 50 cases of primary infections, statistical analysis demonstrated that E. faecalis was strongly associated with persistent infections. The average odds of detecting E. faecalis in cases of persistent infections associated with treatment failure were 9.1. The results of this study indicated that E. faecalis is significantly more associated with asymptomatic cases of primary endodontic infections than with symptomatic ones. Furthermore, E. 
faecalis was much more likely to be found in cases of failed endodontic therapy than in primary infections.", "title": "" }, { "docid": "b81f831c1152bb6a8812ad800324a6cd", "text": "Measures of semantic similarity between concepts are widely used in Natural Language Processing. In this article, we show how six existing domain-independent measures can be adapted to the biomedical domain. These measures were originally based on WordNet, an English lexical database of concepts and relations. In this research, we adapt these measures to the SNOMED-CT ontology of medical concepts. The measures include two path-based measures, and three measures that augment path-based measures with information content statistics from corpora. We also derive a context vector measure based on medical corpora that can be used as a measure of semantic relatedness. These six measures are evaluated against a newly created test bed of 30 medical concept pairs scored by three physicians and nine medical coders. We find that the medical coders and physicians differ in their ratings, and that the context vector measure correlates most closely with the physicians, while the path-based measures and one of the information content measures correlates most closely with the medical coders. We conclude that there is a role both for more flexible measures of relatedness based on information derived from corpora, as well as for measures that rely on existing ontological structures.", "title": "" } ]
scidocsrr
520d498214af3ad777f1619b2db3e586
Eye-Blink Detection Using Facial Landmarks
[ { "docid": "773b5914dce6770b2db707ff4536c7f6", "text": "This paper presents an automatic drowsy-driver monitoring and accident prevention system based on monitoring changes in eye blink duration. Our proposed method detects visual changes in eye locations using a proposed horizontal symmetry feature of the eyes. The method detects eye blinks via a standard webcam in real time at 110 fps at a resolution of 320×240. Experimental results on the ZJU [3] eye-blink database showed that the proposed system detects eye blinks with 94% accuracy and a 1% false positive rate.", "title": "" }, { "docid": "575febd59eeb276d4714428093299c8e", "text": "A new eye blink detection algorithm is proposed. It is based on analyzing the variance of the vertical component of motion vectors in the eye region. Face and eyes are detected with a Viola–Jones-type algorithm. Next, a grid of points is placed over the eye regions and tracked with a KLT tracker. Eye regions are divided into 3×3 cells. For each cell, an average motion vector is estimated from the motion vectors of the individual tracked points. Simple state machines are set up to analyse these variances for each eye. This makes the solution more robust, with a lower false positive rate than other tracking-based methods. We achieve the best results on the Talking Face dataset (mean accuracy 99%) and state-of-the-art results on the ZJU dataset.", "title": "" }, { "docid": "9096c5bfe44df6dc32641b8f5370d8d0", "text": "This paper presents a nonintrusive prototype computer vision system for monitoring a driver's vigilance in real time. It is based on a hardware system for the real-time acquisition of a driver's images using an active IR illuminator and a software implementation for monitoring visual behaviors that characterize a driver's level of vigilance. Six parameters are calculated: percent eye closure (PERCLOS), eye closure duration, blink frequency, nodding frequency, face position, and fixed gaze. These parameters are combined using a fuzzy classifier to infer the driver's level of inattentiveness. The use of multiple visual parameters and their fusion yields a more robust and accurate characterization of inattention than using a single parameter. The system has been tested on sequences recorded under night and day driving conditions on a motorway with different users. Experimental results and conclusions about the performance of the system are presented.", "title": "" } ]
[ { "docid": "ca729733929e23acffbfec5138f42155", "text": "Lymphadenopathy is benign and self-limited in most patients. Etiologies include malignancy, infection, and autoimmune disorders, as well as medications and iatrogenic causes. The history and physical examination alone usually identify the cause of lymphadenopathy. When the cause is unknown, lymphadenopathy should be classified as localized or generalized. Patients with localized lymphadenopathy should be evaluated for etiologies typically associated with the region involved according to lymphatic drainage patterns. Generalized lymphadenopathy, defined as two or more involved regions, often indicates underlying systemic disease. Risk factors for malignancy include age older than 40 years, male sex, white race, supraclavicular location of the nodes, and presence of systemic symptoms such as fever, night sweats, and unexplained weight loss. Palpable supraclavicular, popliteal, and iliac nodes are abnormal, as are epitrochlear nodes greater than 5 mm in diameter. The workup may include blood tests, imaging, and biopsy depending on clinical presentation, location of the lymphadenopathy, and underlying risk factors. Biopsy options include fine-needle aspiration, core needle biopsy, or open excisional biopsy. Antibiotics may be used to treat acute unilateral cervical lymphadenitis, especially in children with systemic symptoms. Corticosteroids have limited usefulness in the management of unexplained lymphadenopathy and should not be used without an appropriate diagnosis.", "title": "" }, { "docid": "e613ef418da545958c2094c5cce8f4f1", "text": "This paper proposes a new visual SLAM technique that not only integrates 6 degrees of freedom (DOF) pose and dense structure but also simultaneously integrates the colour information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only allows to take into account the full range of image deformations but also allows to propose a novel criteria for combining the low resolution images together based on the difference in resolution between different images in 6D space. Another originality of the proposed approach with respect to the current state of the art lies in the minimisation of both colour (RGB) and depth (D) errors, whilst competing approaches only minimise geometry. Several results are given showing that this technique runs in real-time (30Hz) and is able to map large scale environments in high-resolution whilst simultaneously improving the accuracy and robustness of the tracking.", "title": "" }, { "docid": "304393092575799920363fdcea0daca4", "text": "We present ClearView, a system for automatically patching errors in deployed software. 
ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention.\n ClearView (1) observes normal executions to learn invariants thatcharacterize the application's normal behavior, (2) uses error detectors to distinguish normal executions from erroneous executions, (3) identifies violations of learned invariants that occur during erroneous executions, (4) generates candidate repair patches that enforce selected invariants by changing the state or flow of control to make the invariant true, and (5) observes the continued execution of patched applications to select the most successful patch.\n ClearView is designed to correct errors in software with high availability requirements. Aspects of ClearView that make it particularly appropriate for this context include its ability to generate patches without human intervention, apply and remove patchesto and from running applications without requiring restarts or otherwise perturbing the execution, and identify and discard ineffective or damaging patches by evaluating the continued behavior of patched applications.\n ClearView was evaluated in a Red Team exercise designed to test its ability to successfully survive attacks that exploit security vulnerabilities. A hostile external Red Team developed ten code injection exploits and used these exploits to repeatedly attack an application protected by ClearView. ClearView detected and blocked all of the attacks. For seven of the ten exploits, ClearView automatically generated patches that corrected the error, enabling the application to survive the attacks and continue on to successfully process subsequent inputs. Finally, the Red Team attempted to make Clear-View apply an undesirable patch, but ClearView's patch evaluation mechanism enabled ClearView to identify and discard both ineffective patches and damaging patches.", "title": "" }, { "docid": "cb2df8e27a3c284028d0fbb86652ae14", "text": "The large bulk of packets/flows in future core networks will require a highly efficient header processing in the switching elements. Simplifying lookup in core network switching elements is capital to transport data at high rates and with low latency. Flexible network hardware combined with agile network control is also an essential property for future software-defined networking. We argue that only further decoupling between the control and data planes will unlock the flexibility and agility in SDN for the design of new network solutions for core networks. This article proposes a new approach named KeyFlow to build a flexible network-fabricbased model. It replaces the table lookup in the forwarding engine by elementary operations relying on a residue number system. This provides us tools to design a stateless core network by still using OpenFlow centralized control. A proof of concept prototype is validated using the Mininet emulation environment and OpenFlow 1.0. The results indicate RTT reduction above 50 percent, especially for networks with densely populated flow tables. KeyFlow achieves above 30 percent reduction in keeping active flow state in the network.", "title": "" }, { "docid": "1701da2aed094fdcbfaca6c2252d2e53", "text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. 
These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras.", "title": "" }, { "docid": "328d2b9a5786729245f18195f36ca75c", "text": "As CMOS technology is scaled down and adopted for many RF and millimeter-wave radio systems, design of T/R switches in CMOS has received considerable attention. Many T/R switches designed in 0.5 ¿m 65 nm CMOS processes have been reported. Table 4 summarizes these T/R switches. Some of them have become great candidates for WLAN and UWB radios. However, none of them met the requirements of mobile cellular and WPAN 60-GHz radios. CMOS device innovations and novel ideas such as artificial dielectric strips and bandgap structures may provide a comprehensive solution to the challenges of design of T/R switches for mobile cellular and 60-GHz radios.", "title": "" }, { "docid": "8d6b938e01c36ba3423a8b25a93bebce", "text": "State changes over time are inherent characteristics of stateful applications. So far, there are almost no attempts to make the past application history programmatically accessible or even modifiable. This is primarily due to the complexity of temporal changes and a difficult alignment with prevalent programming primitives and persistence strategies. Retroactive computing enables powerful capabilities though, including computations and predictions of alternate application timelines, post-hoc bug fixes, or retroactive state explorations. We propose an event-driven programming model that is oriented towards serverless computing and applies retroaction to the event sourcing paradigm. Our model is deliberately restrictive, but therefore keeps the complexity of retroactive operations in check. We introduce retro-λ, a runtime platform that implements the model and provides retroactive capabilites to its applications. While retro-λ only shows negligible performance overheads compared to similar solutions for running regular applications, it enables its users to execute retroactive computations on the application histories as part of its programming model.", "title": "" }, { "docid": "abdd688f821a450ebe0eb70d720989c2", "text": "In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. 
An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstating the usefulness of the model.", "title": "" }, { "docid": "b05fc1f939ff50dc07dbbc170cd28478", "text": "A compact multiresonant antenna for octaband LTE/WWAN operation in the internal smartphone applications is proposed and discussed in this letter. With a small volume of 15×25×4 mm3, the presented antenna comprises two direct feeding strips and a chip-inductor-loaded two-branch shorted strip. The two direct feeding strips can provide two resonant modes at around 1750 and 2650 MHz, and the two-branch shorted strip can generate a double-resonance mode at about 725 and 812 MHz. Moreover, a three-element bandstop matching circuit is designed to generate an additional resonance for bandwidth enhancement of the lower band. Ultimately, up to five resonances are achieved to cover the desired 704-960- and 1710-2690-MHz bands. Simulated and measured results are presented to demonstrate the validity of the proposed antenna.", "title": "" }, { "docid": "710febdd18f40c9fc82f8a28039362cc", "text": "The paper deals with engineering an electric wheelchair from a common wheelchair and then developing a Brain Computer Interface (BCI) between the electric wheelchair and the human brain. A portable EEG headset and firmware signal processing together facilitate the movement of the wheelchair integrating mind activity and frequency of eye blinks of the patient sitting on the wheelchair with the help of Microcontroller Unit (MCU). The target population for the mind controlled wheelchair is the patients who are paralyzed below the neck and are unable to use conventional wheelchair interfaces. This project aims at creating a cost efficient solution, later intended to be distributed as an add-on conversion unit for a common manual wheelchair. A Neurosky mind wave headset is used to pick up EEG signals from the brain. This is a commercialized version of the Open-EEG Project. The signal obtained from EEG sensor is processed by the ARM microcontroller FRDM KL-25Z, a Freescale board. The microcontroller takes decision for determining the direction of motion of wheelchair based on floor detection and obstacle avoidance sensors mounted on wheelchair’s footplate. The MCU shows real time information on a color LCD interfaced to it. Joystick control of the wheelchair is also provided as an additional interface option that can be chosen from the menu system of the project.", "title": "" }, { "docid": "9c87c09676570500f6b87ed694aff1dc", "text": "The integration of Doubly Fed Induction Generator (DFIG) based wind farm into the power grid has become a major concern for power system engineers today. Voltage stability is a key factor to maintain DFIG-based wind farm in service during the grid disturbances. This paper investigates the implementation of STATCOM to overcome the voltage stability issue for DFIG-based wind farm connected to a distribution network. The study includes the implementation of a static synchronous compensator (STATCOM) as a dynamic reactive power compensator at the point of common coupling to maintain stable voltage by protecting DFIG-based wind farm interconnected to a distribution system from going offline during and after the disturbances. 
The developed system is simulated in MATLAB/Simulink and the results show that the STATCOM improves the transient voltage stability and therefore helps the wind turbine generator system to remain in service during grid faults.", "title": "" }, { "docid": "9ae370847ec965a3ce9c7636f8d6a726", "text": "In this paper we present a wearable device for control of home automation systems via hand gestures. This solution has many advantages over traditional home automation interfaces in that it can be used by those with loss of vision, motor skills, and mobility. By combining other sources of context with the pendant we can reduce the number and complexity of gestures while maintaining functionality. As users input gestures, the system can also analyze their movements for pathological tremors. This information can then be used for medical diagnosis, therapy, and emergency services.Currently, the Gesture Pendant can recognize control gestures with an accuracy of 95% and userdefined gestures with an accuracy of 97% It can detect tremors above 2HZ within .1 Hz.", "title": "" }, { "docid": "ab390e0bee6b8fb33cda52821c7787ff", "text": "Zero-day polymorphic worms pose a serious threat to the Internet security. With their ability to rapidly propagate, these worms increasingly threaten the Internet hosts and services. Not only can they exploit unknown vulnerabilities but can also change their own representations on each new infection or can encrypt their payloads using a different key per infection. They have many variations in the signatures of the same worm thus, making their fingerprinting very difficult. Therefore, signature-based defenses and traditional security layers miss these stealthy and persistent threats. This paper provides a detailed survey to outline the research efforts in relation to detection of modern zero-day malware in form of zero-day polymorphic worms.", "title": "" }, { "docid": "c688d24fd8362a16a19f830260386775", "text": "We present a fast iterative algorithm for identifying the Support Vectors of a given set of points. Our algorithm works by maintaining a candidate Support Vector set. It uses a greedy approach to pick points for inclusion in the candidate set. When the addition of a point to the candidate set is blocked because of other points already present in the set we use a backtracking approach to prune away such points. To speed up convergence we initialize our algorithm with the nearest pair of points from opposite classes. We then use an optimization based approach to increment or prune the candidate Support Vector set. The algorithm makes repeated passes over the data to satisfy the KKT constraints. The memory requirements of our algorithm scale as O(|S|) in the average case, where|S| is the size of the Support Vector set. We show that the algorithm is extremely competitive as compared to other conventional iterative algorithms like SMO and the NPA. We present results on a variety of real life datasets to validate our claims.", "title": "" }, { "docid": "8cc28165debbb8cc430dc78098c0cd87", "text": "Aaron Kravitz, for their help with the data collection. We are grateful to Ole-Kristian Hope, Jan Mahrt-Smith, and seminar participants at the University of Toronto for useful comments. Abstract Managers make different decisions in countries with poor protection of investor rights and poor financial development. One possible explanation is that shareholder-wealth maximizing managers face different tradeoffs in such countries (the tradeoff theory). 
Alternatively, firms in such countries are less likely to be managed for the benefit of shareholders because the poor protection of investor rights makes it easier for management and controlling shareholders to appropriate corporate resources for their own benefit (the agency costs theory). Holdings of liquid assets by firms across countries are consistent with Keynes' transaction and precautionary demand for money theories. Firms in countries with greater GDP per capita hold more cash as predicted. Controlling for economic development, firms in countries with more risk and with poor protection of investor rights hold more cash. The tradeoff theory and the agency costs theory can both explain holdings of liquid assets across countries. However, the fact that a dollar of cash is worth less than $0.65 to the minority shareholders of firms in such countries but worth approximately $1 in countries with good protection of investor rights and high financial development is only consistent with the agency costs theory. 2 1. Introduction Recent work shows that countries where institutions that protect investor rights are weak perform poorly along a number of dimensions. In particular, these countries have lower growth, less well-developed financial markets, and more macroeconomic volatility. 1 To measure the quality of institutions, authors have used, for instance, indices of the risk of expropriation, the level of corruption, and the rule of law. Since poor institutions could result from poor economic performance rather than cause it, authors have also used the origin of a country's legal system (La 2003) as instruments for the quality of institutions. For the quality of institutions to matter for economic performance, it has to affect the actions of firms and individuals. Recent papers examine how dividend, investment, asset composition, and capital structure policies are related to the quality of institutions. 2 In this paper, we focus more directly on why firm policies depend on the quality of institutions. The quality of institutions can affect firm policies for two different reasons. First, a country's protection of investor rights may influence the relative prices or …", "title": "" }, { "docid": "b5b08bdd830144741cf900f6d41fe87d", "text": "A wealth of research has established that practice tests improve memory for the tested material. Although the benefits of practice tests are well documented, the mechanisms underlying testing effects are not well understood. We propose the mediator effectiveness hypothesis, which states that more-effective mediators (that is, information linking cues to targets) are generated during practice involving tests with restudy versus during restudy only. Effective mediators must be retrievable at time of test and must elicit the target response. We evaluated these two components of mediator effectiveness for learning foreign language translations during practice involving either test-restudy or restudy only. Supporting the mediator effectiveness hypothesis, test-restudy practice resulted in mediators that were more likely to be retrieved and more likely to elicit targets on a final test.", "title": "" }, { "docid": "007634725171f426691246c419f067ad", "text": "A flexible multidelay block frequency domain (MDF) adaptive filter is presented. The distinct feature of the MDF adaptive filter is to allow one to choose the size of an FFT tailored to the efficient use of a hardware, rather than the requirement of a specific application. 
The MDF adaptive filter also requires less memory and so reduces the requirement and cost of a hardware. In performance, the MDF adaptive filter introduces smaller block delay and is faster,.ideal for a time-varying system such as modeling an acoustic path in a teleconference room. This is achieved by using smaller block size, updating the weight vectors more often, and reducing the total execution time of the adaptive process. The MDF adaptive filter compares favorably to other frequency domain adaptive filters when its adaptation speed and misadjustment are tested in computer simulations.", "title": "" }, { "docid": "aec48ddea7f21cabb9648eec07c31dcd", "text": "High voltage Marx generator implementation using IGBT (Insulated Gate Bipolar Transistor) stacks is proposed in this paper. To protect the Marx generator at the moment of breakdown, AOCP (Active Over-Current Protection) part is included. The Marx generator is composed of 12 stages and each stage is made of IGBT stacks, two diode stacks, and capacitors. IGBT stack is used as a single switch. Diode stacks and inductors are used to charge the high voltage capacitor at each stage without power loss. These are also used to isolate input and high voltage negative output in high voltage generation mode. The proposed Marx generator implementation uses IGBT stack with a simple driver and has modular design. This system structure gives compactness and easiness to implement the total system. Some experimental and simulated results are included to verify the system performances in this paper.", "title": "" }, { "docid": "0b86a006b1f8e3a5e940daef25fe7d58", "text": "While drug toxicity (especially hepatotoxicity) is the most frequent reason cited for withdrawal of an approved drug, no simple solution exists to adequately predict such adverse events. Simple cytotoxicity assays in HepG2 cells are relatively insensitive to human hepatotoxic drugs in a retrospective analysis of marketed pharmaceuticals. In comparison, a panel of pre-lethal mechanistic cellular assays hold the promise to deliver a more sensitive approach to detect endpoint-specific drug toxicities. The panel of assays covered by this review includes steatosis, cholestasis, phospholipidosis, reactive intermediates, mitochondria membrane function, oxidative stress, and drug interactions. In addition, the use of metabolically competent cells or the introduction of major human hepatocytes in these in vitro studies allow a more complete picture of potential drug side effect. Since inter-individual therapeutic index (TI) may differ from patient to patient, the rational use of one or more of these cellular assay and targeted in vivo exposure data may allow pharmaceutical scientists to select drug candidates with a higher TI potential in the drug discovery phase.", "title": "" }, { "docid": "07abb64e3be2cccfc264c42379d27f9b", "text": "BACKGROUND\nContacts between patients, patients and health care workers (HCWs) and among HCWs represent one of the important routes of transmission of hospital-acquired infections (HAI). A detailed description and quantification of contacts in hospitals provides key information for HAIs epidemiology and for the design and validation of control measures.\n\n\nMETHODS AND FINDINGS\nWe used wearable sensors to detect close-range interactions (\"contacts\") between individuals in the geriatric unit of a university hospital. Contact events were measured with a spatial resolution of about 1.5 meters and a temporal resolution of 20 seconds. 
The study included 46 HCWs and 29 patients and lasted for 4 days and 4 nights. 14,037 contacts were recorded overall, 94.1% of which during daytime. The number and duration of contacts varied between mornings, afternoons and nights, and contact matrices describing the mixing patterns between HCW and patients were built for each time period. Contact patterns were qualitatively similar from one day to the next. 38% of the contacts occurred between pairs of HCWs and 6 HCWs accounted for 42% of all the contacts including at least one patient, suggesting a population of individuals who could potentially act as super-spreaders.\n\n\nCONCLUSIONS\nWearable sensors represent a novel tool for the measurement of contact patterns in hospitals. The collected data can provide information on important aspects that impact the spreading patterns of infectious diseases, such as the strong heterogeneity of contact numbers and durations across individuals, the variability in the number of contacts during a day, and the fraction of repeated contacts across days. This variability is however associated with a marked statistical stability of contact and mixing patterns across days. Our results highlight the need for such measurement efforts in order to correctly inform mathematical models of HAIs and use them to inform the design and evaluation of prevention strategies.", "title": "" } ]
scidocsrr
1e1acee4ebf3bd971192336f72d93292
Multiple-input single ended primary inductor converter (SEPIC) converter for distributed generation applications
[ { "docid": "2048695744ff2a7905622dfe671ddb88", "text": "Many applications call for high step-up dc–dc converters that do not require isolation. Some dc–dc converters can provide high step-up voltage gain, but with the penalty of either an extreme duty ratio or a large amount of circulating energy. DC–DC converters with coupled inductors can provide high voltage gain, but their efficiency is degraded by the losses associated with leakage inductors. Converters with active clamps recycle the leakage energy at the price of increasing topology complexity. A family of high-efficiency, high step-up dc–dc converters with simple topologies is proposed in this paper. The proposed converters, which use diodes and coupled windings instead of active switches to realize functions similar to those of active clamps, perform better than their active-clamp counterparts. High efficiency is achieved because the leakage energy is recycled and the output rectifier reverse-recovery problem is alleviated.", "title": "" } ]
[ { "docid": "c73b5b81fa75676e96309610b4c6ac81", "text": "We present a theory of excess stock market volatility, in which market movements are due to trades by very large institutional investors in relatively illiquid markets. Such trades generate significant spikes in returns and volume, even in the absence of important news about fundamentals. We derive the optimal trading behavior of these investors, which allows us to provide a unified explanation for apparently disconnected empirical regularities in returns, trading volume and investor size.", "title": "" }, { "docid": "23fc59a5a53906429a9e5d9cfb54bdc4", "text": "The greater palatine canal is an important anatomical structure that is often utilized as a pathway for infiltration of local anesthesia to affect sensation and hemostasis. Increased awareness of the length and anatomic variation in the anatomy of this structure is important when performing surgical procedures in this area (e.g., placement of osseointegrated dental implants). We examined the anatomy of the greater palatine canal using data obtained from CBCT scans of 500 subjects. Both right and left canals were viewed (N = 1000) in coronal and sagittal planes, and their paths and lengths determined. The average length of the greater palatine canal was 29 mm (±3 mm), with a range from 22 to 40 mm. Coronally, the most common anatomic pattern consisted of the canal traveling inferior-laterally for a distance then directly inferior for the remainder (43.3%). In the sagittal view, the canal traveled most frequently at an anterior-inferior angle (92.9%).", "title": "" }, { "docid": "d1b6df05dbb85f6fe96e22fbc6183c35", "text": "This paper presents an efficient algorithm using Local search, various heuristics and Tabu search elements, which is capable to find solution for huge instances of the n-queens (hundreds of millions n). The algorithm returns random solutions in short time even on an ordinary personal computer. There is no other faster algorithm in n-Queens bibliography as our presented algorithm so far.", "title": "" }, { "docid": "790d30535edadb8e6318b6907b8553f3", "text": "Learning to anticipate future events on the basis of past experience with the consequences of one's own behavior (operant conditioning) is a simple form of learning that humans share with most other animals, including invertebrates. Three model organisms have recently made significant contributions towards a mechanistic model of operant conditioning, because of their special technical advantages. Research using the fruit fly Drosophila melanogaster implicated the ignorant gene in operant conditioning in the heat-box, research on the sea slug Aplysia californica contributed a cellular mechanism of behavior selection at a convergence point of operant behavior and reward, and research on the pond snail Lymnaea stagnalis elucidated the role of a behavior-initiating neuron in operant conditioning. These insights demonstrate the usefulness of a variety of invertebrate model systems to complement and stimulate research in vertebrates.", "title": "" }, { "docid": "8da2450cbcb9b43d07eee187e5bf07f1", "text": "We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. 
Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.", "title": "" }, { "docid": "c3500e2b50f70c81d7f2c4a425f12742", "text": "Material recognition is an important subtask in computer vision. In this paper, we aim for the identification of material categories from a single image captured under unknown illumination and view conditions. Therefore, we use several features which cover various aspects of material appearance and perform supervised classification using Support Vector Machines. We demonstrate the feasibility of our approach by testing on the challenging Flickr Material Database. Based on this dataset, we also carry out a comparison to a previously published work [Liu et al., ”Exploring Features in a Bayesian Framework for Material Recognition”, CVPR 2010] which uses Bayesian inference and reaches a recognition rate of 44.6% on this dataset and represents the current state-of the-art. With our SVM approach we obtain 53.1% and hence, significantly outperform this approach.", "title": "" }, { "docid": "808115043786372af3e3fb726cc3e191", "text": "Scapy is a free and open source packet manipulation environment written in Python language. In this paper we present a Modbus extension to Scapy, and show how this environment can be used to build tools for security analysis of industrial network protocols. Our implementation can be extended to other industrial network protocols and can help security analysts to understand how these protocols work under attacks or adverse conditions.", "title": "" }, { "docid": "4b6b9539468db238d92e9762b2650b61", "text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. 
A framework highlighting essential building blocks of BPM can particularly serve the following purposes: M. Rosemann (*) Information Systems Discipline, Faculty of Science and Technology, Queensland University of Technology, Brisbane, Australia e-mail: m.rosemann@qut.edu.au J. vom Brocke and M. Rosemann (eds.), Handbook on Business Process Management 1, International Handbooks on Information Systems, DOI 10.1007/978-3-642-00416-2_5, # Springer-Verlag Berlin Heidelberg 2010 107 l Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. l Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. l Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. l Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required. For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. l Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? 
How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)? 108 M. Rosemann and J. vom Brocke", "title": "" }, { "docid": "a39c9399742571ca389813ffb7e7657e", "text": "Developed agriculture needs to find new ways to improve efficiency. One approach is to utilise available information technologies in the form of more intelligent machines to reduce and target energy inputs in more effective ways than in the past. Precision Farming has shown benefits of this approach but we can now move towards a new generation of equipment. The advent of autonomous system architectures gives us the opportunity to develop a complete new range of agricultural equipment based on small smart machines that can do the right thing, in the right place, at the right time in the right way.", "title": "" }, { "docid": "c9a5d230bd49be561f879da43a593fb8", "text": "Schizophrenia is a syndrome that is typically accompanied by delusions and hallucinations that might be associated with insular pathology. Music intervention, as a complementary therapy, is commonly used to improve psychiatric symptoms in the maintenance stage of schizophrenia. In this study, we employed a longitudinal design to assess the effects of listening to Mozart music on the insular functional connectivity (FC) in patients with schizophrenia. Thirty-six schizophrenia patients were randomly divided into two equal groups as follows: the music intervention (MTSZ) group, which received a 1-month music intervention series combined with antipsychotic drugs, and the no-music intervention (UMTSZ) group, which was treated solely with antipsychotic drugs. Resting-state functional magnetic resonance imaging (fMRI) scans were performed at the following three timepoints: baseline, 1 month after baseline and 6 months after baseline. Nineteen healthy participants were recruited as controls. An FC analysis seeded in the insular subregions and machine learning techniques were used to examine intervention-related changes. After 1 month of listening to Mozart music, the MTSZ showed increased FC in the dorsal anterior insula (dAI) and posterior insular (PI) networks, including the dAI-ACC, PI-pre/postcentral cortices, and PI-ACC connectivity. However, these enhanced FCs had vanished in follow-up visits after 6 months. Additionally, a support vector regression on the FC of the dAI-ACC at baseline yielded a significant prediction of relative symptom remission in response to music intervention. Furthermore, the validation analyses revealed that 1 month of music intervention could facilitate improvement of the insular FC in schizophrenia. Together, these findings revealed that the insular cortex could potentially be an important region in music intervention for patients with schizophrenia, thus improving the patients' psychiatric symptoms through normalizing the salience and sensorimotor networks.", "title": "" }, { "docid": "7f6e03069810f9d7ef68d6a775b8849b", "text": "For more than a century, the déjà vu experience has been examined through retrospective surveys, prospective surveys, and case studies. About 60% of the population has experienced déjà vu, and its frequency decreases with age. Déjà vu appears to be associated with stress and fatigue, and it shows a positive relationship with socioeconomic level and education. 
Scientific explanations of déjà vu fall into 4 categories: dual processing (2 cognitive processes momentarily out of synchrony), neurological (seizure, disruption in neuronal transmission), memory (implicit familiarity of unrecognized stimuli),and attentional (unattended perception followed by attended perception). Systematic research is needed on the prevalence and etiology of this culturally familiar cognitive experience, and several laboratory models may help clarify this illusion of recognition.", "title": "" }, { "docid": "4c627f29b8006b81f4a2415004775cf9", "text": "Autonomous learning has been a promising direction in control and robotics for more than a decade since data-driven learning allows to reduce the amount of engineering knowledge, which is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the art RL our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.", "title": "" }, { "docid": "a3cb91fb614f3f772a277b3d125c4088", "text": "Exploring the inherent technical challenges in realizing the potential of Big Data.", "title": "" }, { "docid": "724c74408f59edaf1b1b4859ccd43ee9", "text": "Motion sickness is a common disturbance occurring in healthy people as a physiological response to exposure to motion stimuli that are unexpected on the basis of previous experience. The motion can be either real, and therefore perceived by the vestibular system, or illusory, as in the case of visual illusion. A multitude of studies has been performed in the last decades, substantiating different nauseogenic stimuli, studying their specific characteristics, proposing unifying theories, and testing possible countermeasures. Several reviews focused on one of these aspects; however, the link between specific nauseogenic stimuli and the unifying theories and models is often not clearly detailed. Readers unfamiliar with the topic, but studying a condition that may involve motion sickness, can therefore have difficulties to understand why a specific stimulus will induce motion sickness. So far, this general audience struggles to take advantage of the solid basis provided by existing theories and models. This review focuses on vestibular-only motion sickness, listing the relevant motion stimuli, clarifying the sensory signals involved, and framing them in the context of the current theories.", "title": "" }, { "docid": "af7f87ab07c3851932b191ac1dea2cc9", "text": "Weather significantly impacts society for better and for worse. 
For example, severe weather hazards caused over $7.9 billion of property damage in 2015 (National Oceanic and Atmospheric Administration/National Centers for Environmental Information 2016; CoreLogic 2016). The National Academies of Sciences, Engineering, and Medicine (2016) cites improving forecasting of such events as a critical priority, and the European Centre for Medium-Range Weather Forecasts (ECMWF) recently announced goals for 2025 (ECMWF 2016) that stress the importance of improving these forecasts. On the positive side, improvements in forecasting solar power, which increasingly impacts the electrical grid, are expected to save utility companies $455 million by 2040 (Haupt et al. 2016). Additional savings can be found through improved forecasting in other areas of computational sustainability. Computational sustainability is a new and growing interdisciplinary research area focusing on computational solutions for questions of Earth sustainability. In recent years, operational numerical weather prediction (NWP) models have significantly increased in resolution (e.g., Weygandt et al. 2009). At the same time, the number and quality of observational systems has grown, and new systems, such as Geostationary Operational Environmental Satellite R series (GOES-R), will generate high-quality data at fine spatial and temporal resolutions. These data contain valuable information, but their variety and volume can be overwhelming to forecasters, and this can hinder decision-making if not handled properly (Karstens et al. 2015, 2016). This data deluge is commonly termed “big data.” Artificial intelligence (AI) and related data science methods have been developed to work with big data across a variety of disciplines. Applying AI techniques in conjunction with a physical understanding of the environment can substantially improve prediction skill for multiple types of high-impact weather. This approach expands on traditional model output statistics (MOS) techniques (Glahn and Lowry 1972), which derive probabilistic, categorical, and deterministic forecasts from NWP model output. Because of their simplicity and longevity, forecasters have gained trust in MOS techniques. AI techniques provide a number of advantages, including easily generalizing spatially and temporally, handling large numbers of predictor variables, integrating physical understanding into the", "title": "" }, { "docid": "adddebf272a3b0fe510ea04ed7cc3837", "text": "PURPOSE\nTo explore the association of angiographic nonperfusion in focal and diffuse recalcitrant diabetic macular edema (DME) in diabetic retinopathy (DR).\n\n\nDESIGN\nA retrospective, observational case series of patients with the diagnosis of recalcitrant DME for at least 2 years placed into 1 of 4 cohorts based on the degree of DR.\n\n\nMETHODS\nA total of 148 eyes of 76 patients met the inclusion criteria at 1 academic institution. Ultra-widefield fluorescein angiography (FA) images and spectral-domain optical coherence tomography (SD OCT) images were obtained on all patients. Ultra-widefield FA images were graded for quantity of nonperfusion, which was used to calculate ischemic index. Main outcome measures were mean ischemic index, mean change in central macular thickness (CMT), and mean number of macular photocoagulation treatments over the 2-year study period.\n\n\nRESULTS\nThe mean ischemic index was 47% (SD 25%; range 0%-99%). 
The mean ischemic index of eyes within Cohorts 1, 2, 3, and 4 was 0%, 34% (range 16%-51%), 53% (range 32%-89%), and 65% (range 47%-99%), respectively. The mean percentage decrease in CMT in Cohorts 1, 2, 3, and 4 were 25.2%, 19.1%, 11.6%, and 7.2%, respectively. The mean number of macular photocoagulation treatments in Cohorts 1, 2, 3, and 4 was 2.3, 4.8, 5.3, and 5.7, respectively.\n\n\nCONCLUSIONS\nEyes with larger areas of retinal nonperfusion and greater severity of DR were found to have the most recalcitrant DME, as evidenced by a greater number of macular photocoagulation treatments and less reduction in SD OCT CMT compared with eyes without retinal nonperfusion. Areas of untreated retinal nonperfusion may generate biochemical mediators that promote ischemia and recalcitrant DME.", "title": "" }, { "docid": "f89236f0cf15d8fa64aca8682d87447f", "text": "This research targeted the learning preferences, goals and motivations, achievements, challenges, and possibilities for life change of self-directed online learners who subscribed to the monthly OpenCourseWare (OCW) e-newsletter from MIT. Data collection included a 25-item survey of 1,429 newsletter subscribers; 613 of whom also completed an additional 15 open-ended survey items. The 25 close-ended survey findings indicated that respondents used a wide range of devices and places to learn for their self-directed learning needs. Key motivational factors included curiosity, interest, and internal need for self-improvement. Factors leading to success or personal change included freedom to learn, resource abundance, choice, control, and fun. In terms of achievements, respondents were learning both specific skills as well as more general skills that help them advance in their careers. Science, math, and foreign language skills were the most desired by the survey respondents. The key obstacles or challenges faced were time, lack of high quality open resources, and membership or technology fees. Several brief stories of life change across different age ranges are documented. Among the chief implications is that learning something new to enhance one’s life or to help others is often more important than course transcript credit or a certificate of completion.", "title": "" }, { "docid": "c0283c87e2a8305ba43ce87bf74a56a6", "text": "Real-world deployments of accelerometer-based human activity recognition systems need to be carefully configured regarding the sampling rate used for measuring acceleration. Whilst a low sampling rate saves considerable energy, as well as transmission bandwidth and storage capacity, it is also prone to omitting relevant signal details that are of interest for contemporary analysis tasks. In this paper we present a pragmatic approach to optimising sampling rates of accelerometers that effectively tailors recognition systems to particular scenarios, thereby only relying on unlabelled sample data from the domain. Employing statistical tests we analyse the properties of accelerometer data and determine optimal sampling rates through similarity analysis. We demonstrate the effectiveness of our method in experiments on 5 benchmark datasets where we determine optimal sampling rates that are each substantially below those originally used whilst maintaining the accuracy of reference recognition systems. c © 2016 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "5a1b49162856c8f2b59fec0e063246e9", "text": "Supply chain network design (SCND) is one of the most crucial planning problems in supply chain management (SCM). Nowadays, design decisions should be viable enough to function well under complex and uncertain business environments for many years or decades. Therefore, it is essential to make these decisions in the presence of uncertainty, as over the last two decades, a large number of relevant publications have emphasized its importance. The aim of this paper is to provide a comprehensive review of studies in the fields of SCND and reverse logistics network design under uncertainty. The paper is organized in two main parts to investigate the basic features of these studies. In the first part, planning decisions, network structure, paradigms and aspects related to SCM are discussed. In the second part, existing optimization techniques for dealing with uncertainty such as recourse-based stochastic programming, risk-averse stochastic programming, robust optimization, and fuzzy mathematical programming are explored in terms of mathematical modeling and solution approaches. Finally, the drawbacks and missing aspects of the related literature are highlighted and a list of potential issues for future research directions is recommended. © 2017 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license. ( http://creativecommons.org/licenses/by/4.0/ )", "title": "" }, { "docid": "979c6c841b3435c3a8995be7b506f6ea", "text": "The immune response goes haywire during sepsis, a deadly condition triggered by infection. Richard S. Hotchkiss and his colleagues take the focus off of the prevailing view that the key aspect of this response is an exuberant inflammatory reaction. They assess recent human studies bolstering the notion that immunosuppression is also a major contributor to the disease. Many people with sepsis succumb to cardiac dysfunction, a process examined by Peter Ward. He showcases the factors that cause cardiomyocyte contractility to wane during the disease.", "title": "" } ]
scidocsrr
afcbf313e35c7d3d53acd0af71920990
Encoder Based Lifelong Learning
[ { "docid": "b82eef6e71621e982ae6e5902dc85c06", "text": "In this paper we introduce a model of lifelong learning, based on a Network of Experts. New tasks / experts are learned and added to the model sequentially, building on what was learned before. To ensure scalability of this process, data from previous tasks cannot be stored and hence is not available when learning a new task. A critical issue in such context, not addressed in the literature so far, relates to the decision which expert to deploy at test time. We introduce a set of gating autoencoders that learn a representation for the task at hand, and, at test time, automatically forward the test sample to the relevant expert. This also brings memory efficiency as only one expert network has to be loaded into memory at any given time. Further, the autoencoders inherently capture the relatedness of one task to another, based on which the most relevant prior model to be used for training a new expert, with fine-tuning or learning-without-forgetting, can be selected. We evaluate our method on image classification and video prediction problems.", "title": "" } ]
[ { "docid": "0ae0e78ac068d8bc27d575d90293c27b", "text": "Deep web refers to the hidden part of the Web that remains unavailable for standard Web crawlers. To obtain content of Deep Web is challenging and has been acknowledged as a significant gap in the coverage of search engines. To this end, the paper proposes a novel deep web crawling framework based on reinforcement learning, in which the crawler is regarded as an agent and deep web database as the environment. The agent perceives its current state and selects an action (query) to submit to the environment according to Q-value. The framework not only enables crawlers to learn a promising crawling strategy from its own experience, but also allows for utilizing diverse features of query keywords. Experimental results show that the method outperforms the state of art methods in terms of crawling capability and breaks through the assumption of full-text search implied by existing methods.", "title": "" }, { "docid": "a3f0f40a97bc9d57388554abf8138c11", "text": "In this paper, we give a universal completion of the ZX-calculus for the whole of pure qubit quantum mechanics. This proof is based on the completeness of another graphical language: the ZW-calculus, with direct translations between these two graphical systems.", "title": "" }, { "docid": "191d247a3d4a5c469adc352f22f75b56", "text": "Read and write assist techniques are now commonly used to lower the minimum operating voltage (Vmin) of an SRAM. In this paper, we review the efficacy of four leading write-assist (WA) techniques and their behavior at lower supply voltages in commercial SRAMs from 65nm, 45nm and 32nm low power technology nodes. In particular, the word-line boosting and negative bit-line WA techniques seem most promising at lower voltages. These two techniques help reduce the value of WLcrit by a factor of ~2.5X at 0.7V and also decrease the 3σ spread by ~3.3X, thus significantly reducing the impact of process variations. These write-assist techniques also impact the dynamic read noise margin (DRNM) of half-selected cells during the write operation. The negative bit-line WA technique has virtually no impact on the DRNM but all other WA techniques degrade the DRNM by 10--15%. In conjunction with the benefit (decrease in WLcrit) and the negative impact (decrease in DRNM), overhead of implementation in terms of area and performance must be analyzed to choose the best write-assist technique for lowering the SRAM Vmin.", "title": "" }, { "docid": "e72ed2b388577122402831d4cd75aa0f", "text": "Development and testing of a compact 200-kV, 10-kJ/s industrial-grade power supply for capacitor charging applications is described. Pulse repetition rate (PRR) can be from single shot to 250 Hz, depending on the storage capacitance. Energy dosing (ED) topology enables high efficiency at switching frequency of up to 55 kHz using standard slow IGBTs. Circuit simulation examples are given. They clearly show zero-current switching at variable frequency during the charge set by the ED governing equations. Peak power drawn from the primary source is about only 60% higher than the average power, which lowers the stress on the input rectifier. Insulation design was assisted by electrostatic field analyses. Field plots of the main transformer insulation illustrate field distribution and stresses in it. Subsystem and system tests were performed including limited insulation life test. 
A precision, high-impedance, fast HV divider was developed for measuring voltages up to 250 kV with risetime down to 10 μs. The charger was successfully tested with stored energy of up to 550 J at discharge via a custom designed open-air spark gap at PRR up to 20 Hz (in bursts). Future work will include testing at customer sites.", "title": "" }, { "docid": "1e40fbed88643aa696d74460dc489358", "text": "We introduce a statistical model for microarray gene expression data that comprises data calibration, the quantification of differential expression, and the quantification of measurement error. In particular, we derive a transformation h for intensity measurements, and a difference statistic Deltah whose variance is approximately constant along the whole intensity range. This forms a basis for statistical inference from microarray data, and provides a rational data pre-processing strategy for multivariate analyses. For the transformation h, the parametric form h(x)=arsinh(a+bx) is derived from a model of the variance-versus-mean dependence for microarray intensity data, using the method of variance stabilizing transformations. For large intensities, h coincides with the logarithmic transformation, and Deltah with the log-ratio. The parameters of h together with those of the calibration between experiments are estimated with a robust variant of maximum-likelihood estimation. We demonstrate our approach on data sets from different experimental platforms, including two-colour cDNA arrays and a series of Affymetrix oligonucleotide arrays.", "title": "" }, { "docid": "1dd1d5304cad393ade793b3435858ce4", "text": "With today‘s ubiquity and popularity of social network applications, the ability to analyze and understand large networks in an ef cient manner becomes critically important. However, as networks become larger and more complex, reasoning about social dynamics via simple statistics is not a feasible option. To overcome these limitations, we can rely on visual metaphors. Visualization nowadays is no longer a passive process that produces images from a set of numbers. Recent years have witnessed a convergence of social network analytics and visualization, coupled with interaction, that is changing the way analysts understand and characterize social networks. In this chapter, we discuss the main goal of visualization and how different metaphors are aimed towards elucidating different aspects of social networks, such as structure and semantics. We also describe a number of methods where analytics and visualization are interwoven towards providing a better comprehension of social structure and dynamics.", "title": "" }, { "docid": "3a98dd611afcfd6d51c319bde3b84cc9", "text": "This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/3, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.", "title": "" }, { "docid": "a3d9290555010116e05506fe43f77a4a", "text": "We present a data processing pipeline to online estimate ego-motion and build a map of the traversed environment, leveraging data from a 3D laser, a camera, and an IMU. 
Different from traditional methods that use a Kalman filter or factor-graph optimization, the proposed method employs a sequential, multi-layer processing pipeline, solving for motion from coarse to fine. The resulting system enables high-frequency, low-latency ego-motion estimation, along with dense, accurate 3D map registration. Further, the system is capable of handling sensor degradation by automatic reconfiguration bypassing failure modules. Therefore, it can operate in the presence of highly dynamic motion as well as in dark, texture-less, and structure-less environments. During experiments, the system demonstrates 0.22% of relative position drift over 9.3km of navigation and robustness w.r.t aggressive motion such as highway speed driving (up to 33m/s).", "title": "" }, { "docid": "1da19f806430077f7ad957dbeb0cb8d1", "text": "BACKGROUND\nTo date, periorbital melanosis is an ill-defined entity. The condition has been stated to be darkening of the skin around the eyes, dark circles, infraorbital darkening and so on.\n\n\nAIMS\nThis study was aimed at exploring the nature of pigmentation in periorbital melanosis.\n\n\nMETHODS\nOne hundred consecutive patients of periorbital melanosis were examined and investigated to define periorbital melanosis. Extent of periorbital melanosis was determined by clinical examination. Wood's lamp examination was performed in all the patients to determine the depth of pigmentation. A 2-mm punch biopsy was carried out in 17 of 100 patients.\n\n\nRESULTS\nIn 92 (92%) patients periorbital melanosis was an extension of pigmentary demarcation line over the face (PDL-F).\n\n\nCONCLUSION\nPeriorbital melanosis and pigmentary demarcation line of the face are not two different conditions; rather they are two different manifestations of the same disease.", "title": "" }, { "docid": "749cfda68d5d7f09c0861dc723563db9", "text": "BACKGROUND\nOnline social networking use has been integrated into adolescents' daily life and the intensity of online social networking use may have important consequences on adolescents' well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China.\n\n\nMETHODS\nA total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods.\n\n\nRESULTS\nTwo factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach's alpha = 0.89, 0.90 and 0.60, and test-retest Intra-class Correlation Coefficient = 0.85, 0.87 and 0.67 for Overall scale, SFUI and EFUI subscale, respectively, p<0.001). As expected, the SNAIS and its subscale scores were correlated significantly with emotional connection to social networking, social networking addiction, Internet addiction, and characteristics related to social networking use.\n\n\nCONCLUSIONS\nThe SNAIS is an easily self-administered scale with good psychometric properties. 
It would facilitate more research in this field worldwide and specifically in the Chinese population.", "title": "" }, { "docid": "b78d4070285ccc14cf38cda4c88401b4", "text": "Information Centric Networking (ICN) as an emerging paradigm for the Future Internet has initially been rather focusing on bandwidth savings in wired networks, but there might also be some significant potential to support communication in mobile wireless networks as well as opportunistic network scenarios, where end systems have spontaneous but time-limited contact to exchange data. This chapter addresses the reasoning why ICN has an important role in mobile and opportunistic networks by identifying several challenges in mobile and opportunistic Information-Centric Networks and discussing appropriate solutions for them. In particular, it discusses the issues of receiver and source mobility. Source mobility needs special attention. Solutions based on routing protocol extensions, indirection, and separation of name resolution and data transfer are discussed. Moreover, the chapter presents solutions for problems in opportunistic Information-Centric Networks. Among those are mechanisms for efficient content discovery in neighbour nodes, resume mechanisms to recover from intermittent connectivity disruptions, a novel agent delegation mechanisms to offload content discovery and delivery to mobile agent nodes, and the exploitation of overhearing to populate routing tables of mobile nodes. Some preliminary performance evaluation results of these developed mechanisms are", "title": "" }, { "docid": "ec6bec86a80b4f100afc3a0a7681afba", "text": "There is a high risk of venous thromboembolism when patients are immobilised following trauma. The combination of low-molecular-weight heparin (LMWH) with graduated compression stockings is frequently used in orthopaedic surgery to try and prevent this, but a relatively high incidence of thromboembolic events remains. Mechanical devices which perform continuous passive motion imitate contractions and increase the volume and velocity of venous flow. In this study 227 trauma patients were randomised to receive either treatment with the Arthroflow device and LMWH or only with the latter. The Arthroflow device passively extends and plantarflexes the feet. Patients were assessed initially by venous-occlusion plethysmography, compression ultrasonography and continuous wave Doppler, which were repeated weekly without knowledge of the category of randomisation. Those who showed evidence of deep-vein thrombosis underwent venography for confirmation. The incidence of deep-vein thrombosis was 25% in the LMWH group compared with 3.6% in those who had additional treatment with the Arthroflow device (p < 0.001). There were no substantial complications or problems of non-compliance with the Arthroflow device. Logistic regression analysis of the risk factors of deep-vein thrombosis showed high odds ratios for operation (4.1), immobilisation (4.3), older than 40 years of age (2.8) and obesity (2.2).", "title": "" }, { "docid": "f698eb36fb75c6eae220cf02e41bdc44", "text": "In this paper, an enhanced hierarchical control structure with multiple current loop damping schemes for voltage unbalance and harmonics compensation (UHC) in ac islanded microgrid is proposed to address unequal power sharing problems. 
The distributed generation (DG) is properly controlled to autonomously compensate voltage unbalance and harmonics while sharing the compensation effort for the real power, reactive power, and unbalance and harmonic powers. The proposed control system of the microgrid mainly consists of the positive sequence real and reactive power droop controllers, voltage and current controllers, the selective virtual impedance loop, the unbalance and harmonics compensators, the secondary control for voltage amplitude and frequency restoration, and the auxiliary control to achieve a high-voltage quality at the point of common coupling. By using the proposed unbalance and harmonics compensation, the auxiliary control, and the virtual positive/negative-sequence impedance loops at fundamental frequency, and the virtual variable harmonic impedance loop at harmonic frequencies, an accurate power sharing is achieved. Moreover, the low bandwidth communication (LBC) technique is adopted to send the compensation command of the secondary control and auxiliary control from the microgrid control center to the local controllers of DG unit. Finally, the hardware-in-the-loop results using dSPACE 1006 platform are presented to demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "9f177381c2ba4c6c90faee339910c6c6", "text": "Behavior genetics has demonstrated that genetic variance is an important component of variation for all behavioral outcomes , but variation among families is not. These results have led some critics of behavior genetics to conclude that heritability is so ubiquitous as to have few consequences for scientific understanding of development , while some behavior genetic partisans have concluded that family environment is not an important cause of developmental outcomes. Both views are incorrect. Genotype is in fact a more systematic source of variability than environment, but for reasons that are methodological rather than substantive. Development is fundamentally nonlinear, interactive, and difficult to control experimentally. Twin studies offer a useful methodologi-cal shortcut, but do not show that genes are more fundamental than environments. The nature-nurture debate is over. The bottom line is that everything is heritable, an outcome that has taken all sides of the nature-nurture debate by surprise. Irving Gottesman and I have suggested that the universal influence of genes on behavior be enshrined as the first law of behavior genetics (Turkheimer & Gottesman, 1991), and at the risk of naming laws that I can take no credit for discovering, it is worth stating the nearly unanimous results of behavior genetics in a more formal manner. ● First Law. All human behavioral traits are heritable. ● Second Law. The effect of being raised in the same family is smaller than the effect of genes. ● Third Law. A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families. It is not my purpose in this brief article to defend these three laws against the many exceptions that might be claimed. The point is that now that the empirical facts are in and no longer a matter of serious controversy, it is time to turn attention to what the three laws mean, to the implications of the genetics of behavior for an understanding of complex human behavior and its development. 
VARIANCE AND CAUSATION IN BEHAVIORAL DEVELOPMENT If the first two laws are taken literally , they seem to herald a great victory for the nature side of the old debate: Genes matter, families do not. To understand why such views are at best an oversimplification of a complex reality, it is necessary to consider the newest wave of opposition that behavior genetics has generated. These new critics , whose most …", "title": "" }, { "docid": "ab92c8ded0001d4103be4e7a8ee3a1f7", "text": "Metabolic syndrome defines a cluster of interrelated risk factors for cardiovascular disease and diabetes mellitus. These factors include metabolic abnormalities, such as hyperglycemia, elevated triglyceride levels, low high-density lipoprotein cholesterol levels, high blood pressure, and obesity, mainly central adiposity. In this context, extracellular vesicles (EVs) may represent novel effectors that might help to elucidate disease-specific pathways in metabolic disease. Indeed, EVs (a terminology that encompasses microparticles, exosomes, and apoptotic bodies) are emerging as a novel mean of cell-to-cell communication in physiology and pathology because they represent a new way to convey fundamental information between cells. These microstructures contain proteins, lipids, and genetic information able to modify the phenotype and function of the target cells. EVs carry specific markers of the cell of origin that make possible monitoring their fluctuations in the circulation as potential biomarkers inasmuch their circulating levels are increased in metabolic syndrome patients. Because of the mixed components of EVs, the content or the number of EVs derived from distinct cells of origin, the mode of cell stimulation, and the ensuing mechanisms for their production, it is difficult to attribute specific functions as drivers or biomarkers of diseases. This review reports recent data of EVs from different origins, including endothelial, smooth muscle cells, macrophages, hepatocytes, adipocytes, skeletal muscle, and finally, those from microbiota as bioeffectors of message, leading to metabolic syndrome. Depicting the complexity of the mechanisms involved in their functions reinforce the hypothesis that EVs are valid biomarkers, and they represent targets that can be harnessed for innovative therapeutic approaches.", "title": "" }, { "docid": "4dc89a72df7859af65b7deac167230a2", "text": "The rapid expansion of the web is causing the constant growth of information, leading to several problems such as increased difficulty of extracting potentially useful knowledge. Web content mining confronts this problem gathering explicit information from different web sites for its access and knowledge discovery. Query interfaces of web databases share common building blocks. After extracting information with parsing approach, we use a new data mining algorithm to match a large number of schemas in databases at a time. Using this algorithm increases the speed of information matching. In addition, instead of simple 1:1 matching, they do complex (m:n) matching between query interfaces. In this paper we present a novel correlation mining algorithm that matches correlated attributes with smaller cost. This algorithm uses Jaccard measure to distinguish positive and negative correlated attributes. After that, system matches the user query with different query interfaces in special domain and finally chooses the nearest query interface with user query to answer to it. 
Keywords—Content mining, complex matching, correlation mining, information extraction.", "title": "" }, { "docid": "44410c17138dae6a9935769b4c79e1a7", "text": "In many multi-label learning problems, especially as the number of labels grow, it is challenging to gather completely annotated data. This work presents a new approach for multi-label learning from incomplete annotations. The main assumption is that because of label correlation, the true label matrix as well as the soft predictions of classifiers shall be approximately low rank. We introduce a posterior regularization technique which enforces soft constraints on the classifiers, regularizing them to prefer sparse and low-rank predictions. Avoiding strict lowrank constraints results in classifiers which better fit the real data. The model can be trained efficiently using EM and stochastic gradient descent. Experiments in both the image and text domains demonstrate the contributions of each modeling assumption and show that the proposed approach achieves state-of-the-art performance on a number of challenging datasets.", "title": "" }, { "docid": "14835b93b580081b0398e5e370b72c2c", "text": "In order for autonomous vehicles to achieve life-long operation in outdoor environments, navigation systems must be able to cope with visual change—whether it’s short term, such as variable lighting or weather conditions, or long term, such as different seasons. As a Global Positioning System (GPS) is not always reliable, autonomous vehicles must be self sufficient with onboard sensors. This thesis examines the problem of localisation against a known map across extreme lighting and weather conditions using only a stereo camera as the primary sensor. The method presented departs from traditional techniques that blindly apply out-of-the-box interest-point detectors to all images of all places. This naive approach fails to take into account any prior knowledge that exists about the environment in which the robot is operating. Furthermore, the point-feature approach often fails when there are dramatic appearance changes, as associating low-level features such as corners or edges is extremely difficult and sometimes not possible. By leveraging knowledge of prior appearance, this thesis presents an unsupervised method for learning a set of distinctive and stable (i.e., stable under appearance changes) feature detectors that are unique to a specific place in the environment. In other words, we learn place-dependent feature detectors that enable vastly superior performance in terms of robustness in exchange for a reduced, but tolerable metric precision. By folding in a method for masking distracting objects in dynamic environments and examining a simple model for external illuminates, such as the sun, this thesis presents a robust localisation system that is able to achieve metric estimates from night-today or summer-to-winter conditions. Results are presented from various locations in the UK, including the Begbroke Science Park, Woodstock, Oxford, and central London. Statement of Authorship This thesis is submitted to the Department of Engineering Science, University of Oxford, in fulfilment of the requirements for the degree of Doctor of Philosophy. This thesis is entirely my own work, and except where otherwise stated, describes my own research. 
Colin McManus, Lady Margaret Hall Funding The work described in this thesis was funded by Nissan Motors.", "title": "" }, { "docid": "10cf5eed6ed3a153b8302ab2de3ebca7", "text": "Olive is one of the most ancient crop plants and the World Olive Germplasm Bank of Cordoba (WOGBC), Spain, is one of the world’s largest collections of olive germplasm. We used 33 SSR (Simple Sequence Repeats) markers and 11 morphological characteristics of the endocarp to characterise, identify and authenticate 824 trees, representing 499 accessions from 21 countries of origin, from the WOGBC collection. The SSR markers exhibited high variability and information content. Of 332 cultivars identified in this study based on unique combinations of SSR genotypes and endocarp morphologies, 200 were authenticated by genotypic and morphological markers matches with authentic control samples. We found 130 SSR genotypes that we considered as molecular variants because they showed minimal molecular differences but the same morphological profile than 48 catalogued cultivars. We reported 15 previously described and 37 new cases of synonyms as well as 26 previously described and seven new cases of homonyms. We detected several errors in accession labelling, which may have occurred at any step during establishment of plants in the collection. Nested sets of 5, 10 and 17 SSRs were proposed to progressively and efficiently identify all of the genotypes studied here. The study provides a useful protocol for the characterisation, identification and authentication of any olive germplasm bank that has facilitated the establishment of a repository of true-to-type cultivars at the WOGBC.", "title": "" }, { "docid": "41ceb618f20b82eaa65588045b609dcb", "text": "In decision making under uncertainty there are two main questions that need to be evaluated: i) What are the future consequences and associated uncertainties of an action, and ii) what is a good (or right) decision or action. Philosophically these issues are categorised as epistemic questions (i.e. questions of knowledge) and ethical questions (i.e. questions of moral and norms). This paper discusses the second issue, and evaluates different bases for a good decision, using different ethical theories as a starting point. This includes the utilitarian ethics of Bentley and Mills, and deontological ethics of Kant, Rawls and Habermas. The paper addresses various principles in risk management and risk related decision making, including cost benefit analysis, minimum safety criterion, the ALARP principle and the precautionary principle.", "title": "" } ]
scidocsrr
5b57fc4f9326af53596dbb0c6e09bc5e
Binary Shapelet Transform for Multiclass Time Series Classification
[ { "docid": "88be12fdd7ec90a7af7337f3d29b2130", "text": "Classification of time series has been attracting great interest over the past decade. While dozens of techniques have been introduced, recent empirical evidence has strongly suggested that the simple nearest neighbor algorithm is very difficult to beat for most time series problems, especially for large-scale datasets. While this may be considered good news, given the simplicity of implementing the nearest neighbor algorithm, there are some negative consequences of this. First, the nearest neighbor algorithm requires storing and searching the entire dataset, resulting in a high time and space complexity that limits its applicability, especially on resource-limited sensors. Second, beyond mere classification accuracy, we often wish to gain some insight into the data and to make the classification result more explainable, which global characteristics of the nearest neighbor cannot provide. In this work we introduce a new time series primitive, time series shapelets, which addresses these limitations. Informally, shapelets are time series subsequences which are in some sense maximally representative of a class. We can use the distance to the shapelet, rather than the distance to the nearest neighbor to classify objects. As we shall show with extensive empirical evaluations in diverse domains, classification algorithms based on the time series shapelet primitives can be interpretable, more accurate, and significantly faster than state-of-the-art classifiers.", "title": "" }, { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "8609f49cc78acc1ba25e83c8e68040a6", "text": "Time series shapelets are small, local patterns in a time series that are highly predictive of a class and are thus very useful features for building classifiers and for certain visualization and summarization tasks. While shapelets were introduced only recently, they have already seen significant adoption and extension in the community. Despite their immense potential as a data mining primitive, there are two important limitations of shapelets. First, their expressiveness is limited to simple binary presence/absence questions. Second, even though shapelets are computed offline, the time taken to compute them is significant. In this work, we address the latter problem by introducing a novel algorithm that finds shapelets in less time than current methods by an order of magnitude. Our algorithm is based on intelligent caching and reuse of computations, and the admissible pruning of the search space. Because our algorithm is so fast, it creates an opportunity to consider more expressive shapelet queries. In particular, we show for the first time an augmented shapelet representation that distinguishes the data based on conjunctions or disjunctions of shapelets. 
We call our novel representation Logical-Shapelets. We demonstrate the efficiency of our approach on the classic benchmark datasets used for these problems, and show several case studies where logical shapelets significantly outperform the original shapelet representation and other time series classification techniques. We demonstrate the utility of our ideas in domains as diverse as gesture recognition, robotics, and biometrics.", "title": "" } ]
[ { "docid": "df4fbaf83a761235c5d77654973b5eb1", "text": "We add to the discussion of how to assess the creativity of programs which generate artefacts such as poems, theorems, paintings, melodies, etc. To do so, we first review some existing frameworks for assessing artefact generation programs. Then, drawing on our experience of building both a mathematical discovery system and an automated painter, we argue that it is not appropriate to base the assessment of a system on its output alone, and that the way it produces artefacts also needs to be taken into account. We suggest a simple framework within which the behaviour of a program can be categorised and described which may add to the perception of creativity in the system.", "title": "" }, { "docid": "cb011c7e0d4d5f6d05e28c07ff02e18b", "text": "The legendary wealth in gold of ancient Egypt seems to correspond with an unexpected high number of gold production sites in the Eastern Desert of Egypt and Nubia. This contribution introduces briefly the general geology of these vast regions and discusses the geology of the different varieties of the primary gold occurrences (always related to auriferous quartz mineralization in veins or shear zones) as well as the variable physico-chemical genesis of the gold concentrations. The development of gold mining over time, from Predynastic (ca. 3000 BC) until the end of Arab gold production times (about 1350 AD), including the spectacular Pharaonic periods is outlined, with examples of its remaining artefacts, settlements and mining sites in remote regions of the Eastern Desert of Egypt and Nubia. Finally, some estimates on the scale of gold production are presented. 2002 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "af572a43542fde321e18675213f635ae", "text": "The representation of 3D pose plays a critical role for 3D action and gesture recognition. Rather than representing a 3D pose directly by its joint locations, in this paper, we propose a Deformable Pose Traversal Convolution Network that applies one-dimensional convolution to traverse the 3D pose for its representation. Instead of fixing the receptive field when performing traversal convolution, it optimizes the convolution kernel for each joint, by considering contextual joints with various weights. This deformable convolution better utilizes the contextual joints for action and gesture recognition and is more robust to noisy joints. Moreover, by feeding the learned pose feature to a LSTM, we perform end-to-end training that jointly optimizes 3D pose representation and temporal sequence recognition. Experiments on three benchmark datasets validate the competitive performance of our proposed method, as well as its efficiency and robustness to handle noisy joints of pose.", "title": "" }, { "docid": "c8d33f21915a6f1403f046ffa17b6e2e", "text": "Synthetic aperture radar (SAR) image segmentation is a difficult problem due to the presence of strong multiplicative noise. To attain multi-region segmentation for SAR images, this paper presents a parametric segmentation method based on the multi-texture model with level sets. Segmentation is achieved by solving level set functions obtained from minimizing the proposed energy functional. To fully utilize image information, edge feature and region information are both included in the energy functional. For the need of level set evolution, the ratio of exponentially weighted averages operator is modified to obtain edge feature. 
Region information is obtained by the improved edgeworth series expansion, which can adaptively model a SAR image distribution with respect to various kinds of regions. The performance of the proposed method is verified by three high resolution SAR images. The experimental results demonstrate that SAR images can be segmented into multiple regions accurately without any speckle pre-processing steps by the proposed method.", "title": "" }, { "docid": "8fa6defe08908c6ee6527d2e3a322a12", "text": "A new wide-band high-efficiency coplanar waveguide-fed printed loop antenna is presented for wireless communication systems in this paper. By adjusting geometrical parameters, the proposed antenna can easily achieve a wide bandwidth. To optimize the antenna performances, a parametric study was conducted with the aid of a commercial software, and based on the optimized geometry, a prototype was designed, fabricated, and tested. The simulated and measured results confirmed that the proposed antenna can operate at (1.68-2.68 GHz) band and at (1.46-2.6 GHz) band with bandwidth of 1 and 1.14 GHz, respectively. Moreover, the antenna has a nearly omnidirectional radiation pattern with a reasonable gain and high efficiency. Due to the above characteristics, the proposed antenna is very suitable for applications in PCS and IMT2000 systems.", "title": "" }, { "docid": "b1313b777c940445eb540b1e12fa559e", "text": "In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.", "title": "" }, { "docid": "7f5ff39232cd491e648d40b070e0709c", "text": "Synthesizing terrain or adding detail to terrains manually is a long and tedious process. With procedural synthesis methods this process is faster but more difficult to control. This paper presents a new technique of terrain synthesis that uses an existing terrain to synthesize new terrain. To do this we use multi-resolution analysis to extract the high-resolution details from existing models and apply them to increase the resolution of terrain. Our synthesized terrains are more heterogeneous than procedural results, are superior to terrains created by texture transfer, and retain the large-scale characteristics of the original terrain.", "title": "" }, { "docid": "23ffdf5e7797e7f01c6d57f1e5546026", "text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. 
Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.", "title": "" }, { "docid": "0eb28373a693593d6ca7c1bef34b3bde", "text": "Software development life cycle or SDLC for short is a methodology for designing, building, and maintaining information and industrial systems. So far, there exist many SDLC models, one of which is the Waterfall model which comprises five phases to be completed sequentially in order to develop a software solution. However, SDLC of software systems has always encountered problems and limitations that resulted in significant budget overruns, late or suspended deliveries, and dissatisfied clients. The major reason for these deficiencies is that project directors are not wisely assigning the required number of workers and resources on the various activities of the SDLC. Consequently, some SDLC phases with insufficient resources may be delayed; while, others with excess resources may be idled, leading to a bottleneck between the arrival and delivery of projects and to a failure in delivering an operational product on time and within budget. This paper proposes a simulation model for the Waterfall development process using the Simphony.NET simulation tool whose role is to assist project managers in determining how to achieve the maximum productivity with the minimum number of expenses, workers, and hours. It helps maximizing the utilization of development processes by keeping all employees and resources busy all the time to keep pace with the arrival of projects and to decrease waste and idle time. As future work, other SDLC models such as spiral and incremental are to be simulated, giving project executives the choice to use a diversity of software development methodologies.", "title": "" }, { "docid": "d06dc916942498014f9d00498c1d1d1f", "text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. 
The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms: state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional.", "title": "" }, { "docid": "4ce8934f295235acc2bbf03c7530842b", "text": "Speech recognition has found its application on various aspects of our daily lives from automatic phone answering service to dictating text and issuing voice commands to computers. In this paper, we present the historical background and technological advances in speech recognition technology over the past few decades. More importantly, we present the steps involved in the design of a speaker-independent speech recognition system. We focus mainly on the pre-processing stage that extracts salient features of a speech signal and a technique called Dynamic Time Warping commonly used to compare the feature vectors of speech signals. These techniques are applied for recognition of isolated as well as connected words spoken. We conduct experiments on MATLAB to verify these techniques. Finally, we design a simple 'Voice-to-Text' converter application using MATLAB.", "title": "" }, { "docid": "8ca0edf4c51b0156c279fcbcb1941d2b", "text": "The good fossil record of trilobite exoskeletal anatomy and ontogeny, coupled with information on their nonbiomineralized tissues, permits analysis of how the trilobite body was organized and developed, and the various evolutionary modifications of such patterning within the group. In several respects trilobite development and form appears comparable with that which may have characterized the ancestor of most or all euarthropods, giving studies of trilobite body organization special relevance in the light of recent advances in the understanding of arthropod evolution and development. The Cambrian diversification of trilobites displayed modifications in the patterning of the trunk region comparable with those seen among the closest relatives of Trilobita. In contrast, the Ordovician diversification of trilobites, although contributing greatly to the overall diversity within the clade, did so within a narrower range of trunk conditions. Trilobite evolution is consistent with an increased premium on effective enrollment and protective strategies, and with an evolutionary trade-off between the flexibility to vary the number of trunk segments and the ability to regionalize portions of the trunk. Cephalon: the anteriormost or head division of the trilobite body composed of a set of conjoined segments whose identity is expressed axially. Thorax: the central portion of the trilobite body containing freely articulating trunk segments. Pygidium: the posterior tergite of the trilobite exoskeleton containing conjoined segments. INTRODUCTION The rich record of the diversity and development of the trilobite exoskeleton (along with information on the geological occurrence, nonbiomineralized tissues, and associated trace fossils of trilobites) provides the best history of any Paleozoic arthropod group.
The retention of features that may have characterized the most recent common ancestor of all living arthropods, which have been lost or obscured in most living forms, provides insights into the nature of the evolutionary radiation of the most diverse metazoan phylum alive today. Studies of phylogenetic stem-group taxa, of which Trilobita provide a prominent example, have special significance in the light of renewed interest in arthropod evolution prompted by comparative developmental genetics. Although we cannot hope to dissect the molecular controls operative within trilobites, the evolutionary developmental biology (evo-devo) approach permits a fresh perspective from which to examine the contributions that paleontology can make to evolutionary biology, which, in the context of the overall evolutionary history of Trilobita, is the subject of this review. TRILOBITES: BODY PLAN AND ONTOGENY Trilobites were a group of marine arthropods that appeared in the fossil record during the early Cambrian approximately 520 Ma and have not been reported from rocks younger than the close of the Permian, approximately 250 Ma. Roughly 15,000 species have been described to date, and although analysis of the occurrence of trilobite genera suggests that the known record is quite complete (Foote & Sepkoski 1999), many new species and genera continue to be established each year. The known diversity of trilobites results from their strongly biomineralized exoskeletons, made of two layers of low magnesium calcite, which was markedly more durable than the sclerites of most other arthropods. Because the exoskeleton was rich in morphological characters and was the only body structure preserved in the vast majority of specimens, skeletal form has figured prominently in the biological interpretation of trilobites.", "title": "" }, { "docid": "6e16d3e2fba39a5bf1d0fe234310405f", "text": "In cloud gaming the game is rendered on a distant cloud server and the resulting video stream is sent back to the user who controls the game via a thin client. The high resource usage of cloud gaming servers is a challenge. Expensive hardware including GPUs have to be efficiently shared among multiple simultaneous users. The cloud servers use virtualization techniques to isolate users and share resources among dedicated servers. The traditional virtualization techniques can however inflict notable performance overhead limiting the user count for a single server. Operating-system-level virtualization instances known as containers are an emerging trend in cloud computing. Containers don't need to virtualize the entire operating system still providing most of the benefits of virtualization. In this paper, we evaluate the container-based alternative to traditional virtualization in cloud gaming systems through extensive experiments. We also discuss the differences needed in system implementation using the container approach and identify the existing limitations.", "title": "" }, { "docid": "9bbf2a9f5afeaaa0f6ca12e86aef8e88", "text": "Phishing is a model problem for illustrating usability concerns of privacy and security because both system designers and attackers battle using user interfaces to guide (or misguide) users.We propose a new scheme, Dynamic Security Skins, that allows a remote web server to prove its identity in a way that is easy for a human user to verify and hard for an attacker to spoof. 
We describe the design of an extension to the Mozilla Firefox browser that implements this scheme.We present two novel interaction techniques to prevent spoofing. First, our browser extension provides a trusted window in the browser dedicated to username and password entry. We use a photographic image to create a trusted path between the user and this window to prevent spoofing of the window and of the text entry fields.Second, our scheme allows the remote server to generate a unique abstract image for each user and each transaction. This image creates a \"skin\" that automatically customizes the browser window or the user interface elements in the content of a remote web page. Our extension allows the user's browser to independently compute the image that it expects to receive from the server. To authenticate content from the server, the user can visually verify that the images match.We contrast our work with existing anti-phishing proposals. In contrast to other proposals, our scheme places a very low burden on the user in terms of effort, memory and time. To authenticate himself, the user has to recognize only one image and remember one low entropy password, no matter how many servers he wishes to interact with. To authenticate content from an authenticated server, the user only needs to perform one visual matching operation to compare two images. Furthermore, it places a high burden of effort on an attacker to spoof customized security indicators.", "title": "" }, { "docid": "729b29b5ab44102541f3ebf8d24efec3", "text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.", "title": "" }, { "docid": "6eda76a015e8cb9122ed89b491474248", "text": "Beauty treatment for skin requires a high-intensity focused ultrasound (HIFU) transducer to generate coagulative necrosis in a small focal volume (e.g., 1 mm³) placed at a shallow depth (3-4.5 mm from the skin surface). 
For this, it is desirable to make the F-number as small as possible under the largest possible aperture in order to generate ultrasound energy high enough to induce tissue coagulation in such a small focal volume. However, satisfying both conditions at the same time is demanding. To meet the requirements, this paper, therefore, proposes a double-focusing technique, in which the aperture of an ultrasound transducer is spherically shaped for initial focusing and an acoustic lens is used to finally focus ultrasound on a target depth of treatment; it is possible to achieve the F-number of unity or less while keeping the aperture of a transducer as large as possible. In accordance with the proposed method, we designed and fabricated a 7-MHz double-focused ultrasound transducer. The experimental results demonstrated that the fabricated double-focused transducer had a focal length of 10.2 mm reduced from an initial focal length of 15.2 mm and, thus, the F-number changed from 1.52 to 1.02. Based on the results, we concluded that the proposed double-focusing method is suitable to decrease F-number while maintaining a large aperture size.", "title": "" }, { "docid": "d1c69dac07439ade32a962134753ab08", "text": "The change history of a software project contains a rich collection of code changes that record previous development experience. Changes that fix bugs are especially interesting, since they record both the old buggy code and the new fixed code. This paper presents a bug finding algorithm using bug fix memories: a project-specific bug and fix knowledge base developed by analyzing the history of bug fixes. A bug finding tool, BugMem, implements the algorithm. The approach is different from bug finding tools based on theorem proving or static model checking such as Bandera, ESC/Java, FindBugs, JLint, and PMD. Since these tools use pre-defined common bug patterns to find bugs, they do not aim to identify project-specific bugs. Bug fix memories use a learning process, so the bug patterns are project-specific, and project-specific bugs can be detected. The algorithm and tool are assessed by evaluating if real bugs and fixes in project histories can be found in the bug fix memories. Analysis of five open source projects shows that, for these projects, 19.3%-40.3% of bugs appear repeatedly in the memories, and 7.9%-15.5% of bug and fix pairs are found in memories. The results demonstrate that project-specific bug fix patterns occur frequently enough to be useful as a bug detection technique. Furthermore, for the bug and fix pairs, it is possible to both detect the bug and provide a strong suggestion for the fix. However, there is also a high false positive rate, with 20.8%-32.5% of non-bug containing changes also having patterns found in the memories. A comparison of BugMem with a bug finding tool, PMD, shows that the bug sets identified by both tools are mostly exclusive, indicating that BugMem complements other bug finding tools.", "title": "" }, { "docid": "df5aaa0492fc07b76eb7f8da97ebf08e", "text": "The aim of the present case report is to describe the orthodontic-surgical treatment of a 17-year-and-9-month-old female patient with a Class III malocclusion, poor facial esthetics, and mandibular and chin protrusion. She had significant anteroposterior and transverse discrepancies, a concave profile, and strained lip closure. Intraorally, she had a negative overjet of 5 mm and an overbite of 5 mm. 
The treatment objectives were to correct the malocclusion, and facial esthetic and also return the correct function. The surgical procedures included a Le Fort I osteotomy for expansion, advancement, impaction, and rotation of the maxilla to correct the occlusal plane inclination. There was 2 mm of impaction of the anterior portion of the maxilla and 5 mm of extrusion in the posterior region. A bilateral sagittal split osteotomy was performed in order to allow counterclockwise rotation of the mandible and anterior projection of the chin, accompanying the maxillary occlusal plane. Rigid internal fixation was used without any intermaxillary fixation. It was concluded that these procedures were very effective in producing a pleasing facial esthetic result, showing stability 7 years posttreatment.", "title": "" }, { "docid": "6ae5f96cd14df30e7ac5cc6b654823df", "text": "A succession of doctrines for enhancing cybersecurity has been advocated in the past, including prevention, risk management, and deterrence through accountability. None has proved effective. Proposals that are now being made view cybersecurity as a public good and adopt mechanisms inspired by those used for public health. This essay discusses the failings of previous doctrines and surveys the landscape of cybersecurity through the lens that a new doctrine, public cybersecurity, provides.", "title": "" }, { "docid": "867b4cb932ad3ec3ec69cdc831d81cc8", "text": "This paper reviews the some of significant works on infant cry signal analysis proposed in the past two decades and reviews the recent progress in this field. The cry of baby cannot be predicted accurately where it is very hard to identify for what it cries for. Experienced parents and specialists in the area of child care such as pediatrician and pediatric nurse can distinguish different sort of cries by just making use their individual perception on auditory sense. This is totally subjective evaluation and not suitable for clinical use. Non-invasive method has been widely used in infant cry signal analysis and has shown very promising results. Various feature extraction and classification algorithms used in infant cry analysis are briefly described. This review gives an insight on the current state of the art works in infant cry signal analysis and concludes with thoughts about the future directions for better representation and interpretation of infant cry signals.", "title": "" } ]
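The double-focusing passage above quotes a focal length reduced from 15.2 mm to 10.2 mm and an F-number change from 1.52 to 1.02. Assuming the usual definition of F-number as focal length divided by aperture diameter (the passage does not state the definition explicitly), the two figures are mutually consistent with a fixed 10 mm aperture:

```latex
% Assumed definition: F-number N = F / D, with D the aperture diameter
N_{\text{before}} = \frac{15.2\,\text{mm}}{10\,\text{mm}} = 1.52,
\qquad
N_{\text{after}} = \frac{10.2\,\text{mm}}{10\,\text{mm}} = 1.02
```

That is, the acoustic lens shortens the focal length while the effective aperture stays the same, which is exactly the trade-off the passage argues for.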
scidocsrr
090383e63402a75b42eb80a6456f6689
Semi-supervised learning approach for Indonesian Named Entity Recognition (NER) using co-training algorithm
[ { "docid": "70e6148316bd8915afd8d0908fb5ab0d", "text": "We consider the problem of using a large unla beled sample to boost performance of a learn ing algorithm when only a small set of labeled examples is available In particular we con sider a problem setting motivated by the task of learning to classify web pages in which the description of each example can be partitioned into two distinct views For example the de scription of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page We assume that either view of the example would be su cient for learning if we had enough labeled data but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled ex amples Speci cally the presence of two dis tinct views of each example suggests strategies in which two learning algorithms are trained separately on each view and then each algo rithm s predictions on new unlabeled exam ples are used to enlarge the training set of the other Our goal in this paper is to provide a PAC style analysis for this setting and more broadly a PAC style framework for the general problem of learning from both labeled and un labeled data We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice This paper is to appear in the Proceedings of the Conference on Computational Learning Theory This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR INTRODUCTION In many machine learning settings unlabeled examples are signi cantly easier to come by than labeled ones One example of this is web page classi cation Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us such as all the CS faculty member pages or all the course home pages at some university To train such a system to automatically classify web pages one would typically rely on hand labeled web pages These labeled examples are fairly expensive to obtain because they require human e ort In contrast the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler Therefore we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible This web page learning problem has an interesting feature Each example in this domain can naturally be described using several di erent kinds of information One kind of information about a web page is the text appearing on the document itself A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web The two problem characteristics mentioned above availability of both labeled and unlabeled data and the availability of two di erent kinds of information about examples suggest the following learning strat egy Using an initial small set of labeled examples nd weak predictors based on each kind of information for instance we might nd that the phrase research inter ests on a web page is a weak indicator that the page is a faculty home page and we might nd that the phrase my advisor on a link is an indicator that the page being pointed to is a faculty page Then attempt to bootstrap from these weak predictors using unlabeled data For instance we could search for pages pointed to with links having the phrase my advisor and use them as 
probably positive examples to further train a learning algorithm based on the words on the text page and vice versa We call this type of bootstrapping co training and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting see for instance The question this raises is is there any reason to believe co training will help Our goal is to address this question by developing a PAC style theoretical framework to better understand the issues involved in this approach We also give some preliminary empirical results on classifying university web pages see Section that are encouraging in this context More broadly the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC as sumptions We address this issue by proposing a notion of compatibility between a data distribution and a target function Section and discuss how this relates to other approaches to combining labeled and unlabeled data Section", "title": "" }, { "docid": "89aa60cefe11758e539f45c5cba6f48a", "text": "For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corporations. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material. Supplements: Click on the \"Resources\" tab to View Downloadable Files:Solutions Power Point Lecture Slides Chapters 1-5, 8-10, 12-13 and 24 Now Available! For additional resourcse visit the author website: http://www.cs.colorado.edu/~martin/slp.html", "title": "" } ]
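The first positive passage above (the co-training paper) describes its bootstrapping loop only in prose, so a compact sketch may help. This is an illustrative reading, not the authors' code: the naive Bayes base learner, the confidence heuristic, and the per-round growth counts are assumptions, and `X_view1` / `X_view2` stand for the page-text and hyperlink-text feature matrices mentioned in the passage.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB  # placeholder base learner

def co_train(X_view1, X_view2, y, labeled_idx, unlabeled_idx,
             rounds=10, per_round=5):
    """Minimal co-training loop in the spirit of the passage above.

    y holds the known labels at labeled_idx; entries at unlabeled_idx may
    contain any placeholder value and are overwritten with pseudo-labels.
    """
    labeled, unlabeled = list(labeled_idx), list(unlabeled_idx)
    clf1, clf2 = MultinomialNB(), MultinomialNB()
    for _ in range(rounds):
        if not unlabeled:
            break
        # each classifier sees only its own view of the labeled pool
        clf1.fit(X_view1[labeled], y[labeled])
        clf2.fit(X_view2[labeled], y[labeled])
        for clf, X in ((clf1, X_view1), (clf2, X_view2)):
            if not unlabeled:
                break
            proba = clf.predict_proba(X[unlabeled])
            # move the examples this view is most confident about into the
            # shared labeled pool, so the other view can train on them next round
            top = np.argsort(proba.max(axis=1))[-per_round:]
            for j in sorted(top, reverse=True):
                idx = unlabeled[j]
                y[idx] = clf.classes_[np.argmax(proba[j])]  # pseudo-label
                labeled.append(idx)
                del unlabeled[j]
    return clf1, clf2
```

For the Indonesian NER query this record is paired with, the two views would instead be, for example, word-internal features versus surrounding-context features of each token.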
[ { "docid": "cb1bfa58eb89539663be0f2b4ea8e64d", "text": "Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a ‘good’ hierarchical clustering is one that minimizes a particular cost function [21]. He showed that this cost function has certain desirable properties: in order to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining ‘good’ objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a ‘natural’ ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem, and design algorithms for this scenario.", "title": "" }, { "docid": "f405c62d932eec05c55855eb13ba804c", "text": "Multilevel converters have been under research and development for more than three decades and have found successful industrial application. However, this is still a technology under development, and many new contributions and new commercial topologies have been reported in the last few years. The aim of this paper is to group and review these recent contributions, in order to establish the current state of the art and trends of the technology, to provide readers with a comprehensive and insightful review of where multilevel converter technology stands and is heading. This paper first presents a brief overview of well-established multilevel converters strongly oriented to their current state in industrial applications to then center the discussion on the new converters that have made their way into the industry. In addition, new promising topologies are discussed. Recent advances made in modulation and control of multilevel converters are also addressed. A great part of this paper is devoted to show nontraditional applications powered by multilevel converters and how multilevel converters are becoming an enabling technology in many industrial sectors. Finally, some future trends and challenges in the further development of this technology are discussed to motivate future contributions that address open problems and explore new possibilities.", "title": "" }, { "docid": "967df203ea4a9f1ac90bb7f6bb498b6e", "text": "Traditional quantum error-correcting codes are designed for the depolarizing channel modeled by generalized Pauli errors occurring with equal probability. Amplitude damping channels model, in general, the decay process of a multilevel atom or energy dissipation of a bosonic system with Markovian bath at zero temperature. We discuss quantum error-correcting codes adapted to amplitude damping channels for higher dimensional systems (qudits). 
For multi-level atoms, we consider a natural kind of decay process, and for bosonic systems, we consider the qudit amplitude damping channel obtained by truncating the Fock basis of the bosonic modes (e.g., the number of photons) to a certain maximum occupation number. We construct families of single-error-correcting quantum codes that can be used for both cases. Our codes have larger code dimensions than the previously known single-error-correcting codes of the same lengths. In addition, we present families of multi-error correcting codes for these two channels, as well as generalizations of our construction technique to error-correcting codes for the qutrit $V$ and $\Lambda$ channels.", "title": "" }, { "docid": "c2659be74498ec68c3eb5509ae11b3c3", "text": "We focus on modeling human activities comprising multiple actions in a completely unsupervised setting. Our model learns the high-level action co-occurrence and temporal relations between the actions in the activity video. We consider the video as a sequence of short-term action clips, called action-words, and an activity is about a set of action-topics indicating which actions are present in the video. Then we propose a new probabilistic model relating the action-words and the action-topics. It allows us to model long-range action relations that commonly exist in the complex activity, which is challenging to capture in the previous works. We apply our model to unsupervised action segmentation and recognition, and also to a novel application that detects forgotten actions, which we call action patching. For evaluation, we also contribute a new challenging RGB-D activity video dataset recorded by the new Kinect v2, which contains several human daily activities as compositions of multiple actions interacted with different objects. The extensive experiments show the effectiveness of our model.", "title": "" }, { "docid": "771611dc99e22b054b936fce49aea7fc", "text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. 
This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.", "title": "" }, { "docid": "f63866fcb11eae78b5095e8f7d21cf8a", "text": "H.264/MPEG4-AVC is the latest video coding standard of the ITU-T video coding experts group (VCEG) and the ISO/IEC moving picture experts group (MPEG). H.264/MPEG4-AVC has recently become the most widely accepted video coding standard since the deployment of MPEG2 at the dawn of digital television, and it may soon overtake MPEG2 in common use. It covers all common video applications ranging from mobile services and videoconferencing to IPTV, HDTV, and HD video storage. This article discusses the technology behind the new H.264/MPEG4-AVC standard, focusing on the main distinct features of its core coding technology and its first set of extensions, known as the fidelity range extensions (FRExt). In addition, this article also discusses the current status of adoption and deployment of the new standard in various application areas", "title": "" }, { "docid": "8c6514a40f1c4ef55cb34336be9b968a", "text": "This survey (N = 224) found that characteristics collectively known as the Dark Triad (i.e. narcissism, psychopathy and Machiavellianism) were correlated with various dimensions of short-term mating but not long-term mating. The link between the Dark Triad and short-term mating was stronger for men than for women. The Dark Triad partially mediated the sex difference in short-term mating behaviour. Findings are consistent with a view that the Dark Triad facilitates an exploitative, short-term mating strategy in men. Possible implications, including that Dark Triad traits represent a bundle of individual differences that promote a reproductively adaptive strategy are discussed. Findings are discussed in the broad context of how an evolutionary approach to personality psychology can enhance our understanding of individual differences. Copyright © 2008 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "418de962446199744b4ced735c506d41", "text": "In this paper, a stereo matching algorithm based on image segments is presented. We propose the hybrid segmentation algorithm that is based on a combination of the Belief Propagation and Mean Shift algorithms with aim to refine the disparity and depth map by using a stereo pair of images. This algorithm utilizes image filtering and modified SAD (Sum of Absolute Differences) stereo matching method. Firstly, a color based segmentation method is applied for segmenting the left image of the input stereo pair (reference image) into regions. The aim of the segmentation is to simplify representation of the image into the form that is easier to analyze and is able to locate objects in images. Secondly, results of the segmentation are used as an input of the local window-based matching method to determine the disparity estimate of each image pixel. The obtained experimental results demonstrate that the final depth map can be obtained by application of segment disparities to the original images. Experimental results with the stereo testing images show that our proposed Hybrid algorithm HSAD gives a good performance.", "title": "" }, { "docid": "79f1473d4eb0c456660543fda3a648f1", "text": "We examine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. 
However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.", "title": "" }, { "docid": "3db8dc56e573488c5085bf5a61ea0d7f", "text": "This paper proposes new approximate coloring and other related techniques which markedly improve the run time of the branchand-bound algorithm MCR (J. Global Optim., 37, 95–111, 2007), previously shown to be the fastest maximum-clique-finding algorithm for a large number of graphs. The algorithm obtained by introducing these new techniques in MCR is named MCS. It is shown that MCS is successful in reducing the search space quite efficiently with low overhead. Consequently, it is shown by extensive computational experiments that MCS is remarkably faster than MCR and other existing algorithms. It is faster than the other algorithms by an order of magnitude for several graphs. In particular, it is faster than MCR for difficult graphs of very high density and for very large and sparse graphs, even though MCS is not designed for any particular type of graphs. MCS can be faster than MCR by a factor of more than 100,000 for some extremely dense random graphs.", "title": "" }, { "docid": "f34af647319436085ab8e667bab795b0", "text": "In the transition from industrial to service robotics, robo ts will have to deal with increasingly unpredictable and variable environments. We present a system that is able to recognize objects of a certain class in an image and to identify their parts for potential interactions. The metho d can recognize objects from arbitrary viewpoints and generalizes to instances that have never been observed during training, even if they are partially occluded and appear against cluttered backgrounds. Our approach builds on the Implicit Shape Model of Leibe et al. (2008). We extend it to couple recognition to the provision of meta-data useful for a task and to the case of multiple viewpoints by integrating it with the dense multi-view correspondence finder of Ferrari et al. (2006). Meta-data can be part labels but also depth estimates, information on material types, or any other pixelwise annotation. We present experimental results on wheelchairs, cars, and motorbikes.", "title": "" }, { "docid": "da3201add57485d574c71c6fa95fc28c", "text": "Two experiments (modeled after J. Deese's 1959 study) revealed remarkable levels of false recall and false recognition in a list learning paradigm. In Experiment 1, subjects studied lists of 12 words (e.g., bed, rest, awake); each list was composed of associates of 1 nonpresented word (e.g., sleep). On immediate free recall tests, the nonpresented associates were recalled 40% of the time and were later recognized with high confidence. In Experiment 2, a false recall rate of 55% was obtained with an expanded set of lists, and on a later recognition test, subjects produced false alarms to these items at a rate comparable to the hit rate. 
The act of recall enhanced later remembering of both studied and nonstudied material. The results reveal a powerful illusion of memory: People remember events that never happened.", "title": "" }, { "docid": "078578f356cb7946e3956c571bef06ee", "text": "Background: Dysphagia is common and costly. The ability of patient symptoms to predict objective swallowing dysfunction is uncertain. Purpose: This study aimed to evaluate the ability of the Eating Assessment Tool (EAT-10) to screen for aspiration risk in patients with dysphagia. Methods: Data from individuals with dysphagia undergoing a videofluoroscopic swallow study between January 2012 and July 2013 were abstracted from a clinical database. Data included the EAT-10, Penetration Aspiration Scale (PAS), total pharyngeal transit (TPT) time, and underlying diagnoses. Bivariate linear correlation analysis, sensitivity, specificity, and predictive values were calculated. Results: The mean age of the entire cohort (N = 360) was 64.40 (± 14.75) years. Forty-six percent were female. The mean EAT-10 was 16.08 (± 10.25) for nonaspirators and 23.16 (± 10.88) for aspirators (P < .0001). There was a linear correlation between the total EAT-10 score and the PAS (r = 0.273, P < .001). Sensitivity and specificity of an EAT-10 > 15 in predicting aspiration were 71% and 53%, respectively. Conclusion: Subjective dysphagia symptoms as documented with the EAT-10 can predict aspiration risk. A linear correlation exists between the EAT-10 and aspiration events (PAS) and aspiration risk (TPT time). Persons with an EAT10 > 15 are 2.2 times more likely to aspirate (95% confidence interval, 1.3907-3.6245). The sensitivity of an EAT-10 > 15 is 71%.", "title": "" }, { "docid": "2bb21a94c803c74ad6c286c7a04b8c5b", "text": "Recently, social media, such as Twitter, has been successfully used as a proxy to gauge the impacts of disasters in real time. However, most previous analyses of social media during disaster response focus on the magnitude and location of social media discussion. In this work, we explore the impact that disasters have on the underlying sentiment of social media streams. During disasters, people may assume negative sentiments discussing lives lost and property damage, other people may assume encouraging responses to inspire and spread hope. Our goal is to explore the underlying trends in positive and negative sentiment with respect to disasters and geographically related sentiment. In this paper, we propose a novel visual analytics framework for sentiment visualization of geo-located Twitter data. The proposed framework consists of two components, sentiment modeling and geographic visualization. In particular, we provide an entropy-based metric to model sentiment contained in social media data. The extracted sentiment is further integrated into a visualization framework to explore the uncertainty of public opinion. We explored Ebola Twitter dataset to show how visual analytics techniques and sentiment modeling can reveal interesting patterns in disaster scenarios.", "title": "" }, { "docid": "ce650daedc7ba277d245a2150062775f", "text": "Amongst the large number of write-and-throw-away-spreadsheets developed for one-time use there is a rather neglected proportion of spreadsheets that are huge, periodically used, and submitted to regular update-cycles like any conventionally evolving valuable legacy application software. However, due to the very nature of spreadsheets, their evolution is particularly tricky and therefore error-prone. 
In our strive to develop tools and methodologies to improve spreadsheet quality, we analysed consolidation spreadsheets of an internationally operating company for the errors they contain. The paper presents the results of the field audit, involving 78 spreadsheets with 60,446 non-empty cells. As a by-product, the study performed was also to validate our analysis tools in an industrial context. The evaluated auditing tool offers the auditor a new view on the formula structure of the spreadsheet by grouping similar formulas into equivalence classes. Our auditing approach defines three similarity criteria between formulae, namely copy, logical and structural equivalence. To improve the visualization of large spreadsheets, equivalences and data dependencies are displayed in separated windows that are interlinked with the spreadsheet. The auditing approach helps to find irregularities in the geometrical pattern of similar formulas.", "title": "" }, { "docid": "867bd0c5f0760715bdfdaeea1290c72f", "text": "In this paper, we propose a real-time lane detection algorithm based on a hyperbola-pair lane boundary model and an improved RANSAC paradigm. Instead of modeling each road boundary separately, we propose a model to describe the road boundary as a pair of parallel hyperbolas on the ground plane. A fuzzy measurement is introduced into the RANSAC paradigm to improve the accuracy and robustness of fitting the points on the boundaries into the model. Our method is able to deal with existence of partial occlusion, other traffic participants and markings, etc. Experiment in many different conditions, including various conditions of illumination, weather and road, demonstrates its high performance and accuracy", "title": "" }, { "docid": "3810c6b33a895730bc57fdc658d3f72e", "text": "Comics have been shown to be able to tell a story by guiding the viewers gaze patterns through a sequence of images. However, not much research has been done on how comic techniques affect these patterns. We focused this study to investigate the effect that the structure of a comics panels have on the viewers reading patterns, specifically with the time spent reading the comic and the number of times the viewer fixates on a point. We use two versions of a short comic as a stimulus, one version with four long panels and another with sixteen smaller panels. We collected data using the GazePoint eye tracker, focusing on viewing time and number of fixations, and we collected subjective information about the viewers preferences using a questionnaire. We found that no significant effect between panel structure and viewing time or number of fixations, but those viewers slightly tended to prefer the format of four long panels.", "title": "" }, { "docid": "9e3bba7a681a838fb0b32c1e06eaae93", "text": "This review focuses on the synthesis, protection, functionalization, and application of magnetic nanoparticles, as well as the magnetic properties of nanostructured systems. Substantial progress in the size and shape control of magnetic nanoparticles has been made by developing methods such as co-precipitation, thermal decomposition and/or reduction, micelle synthesis, and hydrothermal synthesis. A major challenge still is protection against corrosion, and therefore suitable protection strategies will be emphasized, for example, surfactant/polymer coating, silica coating and carbon coating of magnetic nanoparticles or embedding them in a matrix/support. 
Properly protected magnetic nanoparticles can be used as building blocks for the fabrication of various functional systems, and their application in catalysis and biotechnology will be briefly reviewed. Finally, some future trends and perspectives in these research areas will be outlined.", "title": "" }, { "docid": "474134af25f1a5cd95b3bc29b3df8be4", "text": "The challenge of combatting malware designed to breach air-gap isolation in order to leak data.", "title": "" }, { "docid": "e3f1ad001f0fc8a3944e5b35fd085a42", "text": "In recent years, training image segmentation networks often needs fine-tuning the model which comes from the initial training upon large-scale classification datasets like ImageNet. Such fine-tuning methods are confronted with three problems: (1) domain gap. (2) mismatch between data size and model size. (3) poor controllability. A more practical solution is to train the segmentation model from scratch, which motivates our Dense In Dense (DID) network. In DID, we put forward an efficient architecture based on DenseNet to further accelerate the information flow inside and outside the dense block. Deep supervision also applies to a progressive upsampling rather than the traditional straightforward upsampling. Our DID Network performs favorably on Camvid dataset, Inria Aerial Image Labeling dataset and Cityscapes by training from scratch with less parameters.", "title": "" } ]
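One of the passages in the list above (on count-based exploration) reduces, in implementation terms, to a hash table of state counts plus a reward bonus of the form β/√n(state). A minimal sketch of that mechanism follows; the SimHash-style random projection, the code length, and the β value are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

class HashCountBonus:
    """Count-based exploration bonus over hashed states (sketch).

    States are binarized with a SimHash-style random projection and counted
    in a dictionary; the bonus follows beta / sqrt(n(state))."""

    def __init__(self, state_dim, code_bits=32, beta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.projection = rng.standard_normal((code_bits, state_dim))
        self.beta = beta
        self.counts = {}

    def _hash(self, state):
        # sign pattern of the projected state, used as a hashable key
        bits = self.projection @ np.asarray(state, dtype=float) > 0
        return bits.tobytes()

    def bonus(self, state):
        key = self._hash(state)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.beta / np.sqrt(self.counts[key])

# usage: shaped_reward = env_reward + tracker.bonus(observation)
```

The code-length parameter controls the granularity the passage mentions: more bits distinguish more states but make each count smaller.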
scidocsrr
fd302182a0cfdfdb5efdbe8e0d2473c6
A Joint Segmentation and Classification Framework for Sentence Level Sentiment Classification
[ { "docid": "6081f8b819133d40522a4698d4212dfc", "text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.", "title": "" }, { "docid": "d5b986cf02b3f9b01e5307467c1faec2", "text": "Most sentiment analysis approaches use as baseline a support vector machines (SVM) classifier with binary unigram weights. In this paper, we explore whether more sophisticated feature weighting schemes from Information Retrieval can enhance classification accuracy. We show that variants of the classictf.idf scheme adapted to sentiment analysis provide significant increases in accuracy, especially when using a sublinear function for term frequency weights and document frequency smoothing. The techniques are tested on a wide selection of data sets and produce the best accuracy to our knowledge.", "title": "" }, { "docid": "cf3804e332e9bec1120261f9e4f98da8", "text": "We propose Bilingually-constrained Recursive Auto-encoders (BRAE) to learn semantic phrase embeddings (compact vector representations for phrases), which can distinguish the phrases with different semantic meanings. The BRAE is trained in a way that minimizes the semantic distance of translation equivalents and maximizes the semantic distance of nontranslation pairs simultaneously. After training, the model learns how to embed each phrase semantically in two languages and also learns how to transform semantic embedding space in one language to the other. We evaluate our proposed method on two end-to-end SMT tasks (phrase table pruning and decoding with phrasal semantic similarities) which need to measure semantic similarity between a source phrase and its translation candidates. Extensive experiments show that the BRAE is remarkably effective in these two tasks.", "title": "" } ]
[ { "docid": "81476f837dd763301ba065ac78c5bb65", "text": "Background: The ideal lip augmentation technique provides the longest period of efficacy, lowest complication rate, and best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with metaregression will focus on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. Methods: A systematic search for all English, French, Spanish, German, Italian, Portuguese and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, Sweetswise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Willey Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. Results: The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level of evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. Conclusions: This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to more conclusively determine the most appropriate approach for this procedure. Level of evidence: IV. © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fc9eb12afb2c86005ae4f06835feb6cc", "text": "Peer pressure is a reoccurring phenomenon in criminal or deviant behaviour especially, as it pertains to adolescents. It may begin in early childhood of about 5years and increase through childhood to become more intense in adolescence years. This paper examines how peer pressure is present in adolescents and how it may influence or create the leverage to non-conformity to societal norms and laws. The paper analyses the process and occurrence of peer influence and pressure on individuals and groups within the framework of the social learning and the social control theories. Major features of the peer pressure process are identified as group dynamics, delinquent peer subculture, peer approval of delinquent behaviour and sanctions for non-conformity which include ridicule, mockery, ostracism and even mayhem or assault in some cases. Also, the paper highlights acceptance and rejection as key concepts that determine the sway or gladiation of adolescents to deviant and criminal behaviour. Finally, it concludes that peer pressure exists for conformity and in delinquent subculture, the result is conformity to criminal codes and behaviour. The paper recommends more urgent, serious and offensive grass root approaches by governments and institutions against this growing threat to the continued peace, orderliness and development of society.", "title": "" }, { "docid": "70a9aa97fc51452fb87288c86d0299d6", "text": "The germline precursor to the ferrochelatase antibody 7G12 was found to bind the polyether jeffamine in addition to its cognate hapten N-methylmesoporphyrin. 
A comparison of the X-ray crystal structures of the ligand-free germline Fab and its complex with either hapten or jeffamine reveals that the germline antibody undergoes significant conformational changes upon the binding of these two structurally distinct ligands, which lead to increased antibody-ligand complementarity. The five somatic mutations introduced during affinity maturation lead to enhanced binding affinity for hapten and a loss in affinity for jeffamine. Moreover, a comparison of the crystal structures of the germline and affinity-matured antibodies reveals that somatic mutations not only fix the optimal binding site conformation for the hapten, but also introduce interactions that interfere with the binding of non-hapten molecules. The structural plasticity of this germline antibody and the structural effects of the somatic mutations that result in enhanced affinity and specificity for hapten likely represent general mechanisms used by the immune response, and perhaps primitive proteins, to evolve high affinity, selective receptors for so many distinct chemical structures.", "title": "" }, { "docid": "6d589aaae8107bf6b71c0f06f7a49a28", "text": "1. INTRODUCTION The explosion of digital connectivity, the significant improvements in communication and information technologies and the enforced global competition are revolutionizing the way business is performed and the way organizations compete. A new, complex and rapidly changing economic order has emerged based on disruptive innovation, discontinuities, abrupt and seditious change. In this new landscape, knowledge constitutes the most important factor, while learning, which emerges through cooperation, together with the increased reliability and trust, is the most important process (Lundvall and Johnson, 1994). The competitive survival and ongoing sustenance of an organisation primarily depend on its ability to redefine and adopt continuously goals, purposes and its way of doing things (Malhotra, 2001). These trends suggest that private and public organizations have to reinvent themselves through 'continuous non-linear innovation' in order to sustain themselves and achieve strategic competitive advantage. The extant literature highlights the great potential of ICT tools for operational efficiency, cost reduction, quality of services, convenience, innovation and learning in private and public sectors. However, scholarly investigations have focused primarily on the effects and outcomes of ICTs (Information & Communication Technology) for the private sector. The public sector has been sidelined because it tends to lag behind in the process of technology adoption and business reinvention. Only recently has the public sector come to recognize the potential importance of ICT and e-business models as a means of improving the quality and responsiveness of the services they provide to their citizens, expanding the reach and accessibility of their services and public infrastructure and allowing citizens to experience a faster and more transparent form of access to government services. The initiatives of government agencies and departments to use ICT tools and applications, Internet and mobile devices to support good governance, strengthen existing relationships and build new partnerships within civil society, are known as eGovernment initiatives. As with e-commerce, eGovernment represents the introduction of a great wave of technological innovation as well as government reinvention. 
It represents a tremendous impetus to move forward in the 21 st century with higher quality, cost effective government services and a better relationship between citizens and government (Fang, 2002). Many government agencies in developed countries have taken progressive steps toward the web and ICT use, adding coherence to all local activities on the Internet, widening local access and skills, opening up interactive services for local debates, and increasing the participation of citizens on promotion and management …", "title": "" }, { "docid": "409baee7edaec587727624192eab93aa", "text": "It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than at the chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus highly interesting that the FN400 effect was not induced, although color manipulation of recognition memory was behaviorally shown, as seen in previous studies. Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory.", "title": "" }, { "docid": "1b6ddffacc50ad0f7e07675cfe12c282", "text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. 
We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.", "title": "" }, { "docid": "eced59d8ec159f3127e7d2aeca76da96", "text": "Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face to face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches, such as handheld devices with composited graphics or see-through head worn displays, is that users are able to interact with 3D virtual objects and each other without cumbersome devices that obstruct face to face interaction. We detail our prototype system and a number of interactive experiences. We present an initial user experiment that shows that participants are able to deduce the size and distance of a virtual projected object. A second experiment shows that participants are able to infer which of a number of targets the other user indicates by pointing.", "title": "" }, { "docid": "dae63c2eb42acf7c5aa75948169abbbf", "text": "This paper introduces a local planner which computes a set of commands, allowing an autonomous vehicle to follow a given trajectory. To do so, the platform relies on a localization system, a map and a cost map which represents the obstacles in the environment. The presented method computes a set of tentative trajectories, using a schema based on a Frenet frame obtained from the global planner. These trajectories are then scored using a linear combination of weighted cost functions. In the presented approach, new weights are introduced in order to satisfy the specificities of our autonomous platform, Verdino. A study on the influence of the defined weights in the final behavior of the vehicle is introduced. From these tests, several configurations have been chosen and ranked according to two different proposed behaviors. The method has been tested both in simulation and in real conditions.", "title": "" }, { "docid": "13f24b04e37c9e965d85d92e2c588c9a", "text": "In this paper we propose a new user purchase preference model based on their implicit feedback behavior. We analyze user behavior data to seek their purchase preference signals. We find that if a user has more purchase preference on a certain item he would tend to browse it for more times. It gives us an important inspiration that, not only purchasing behavior but also other types of implicit feedback like browsing behavior, can indicate user purchase preference. We further find that user purchase preference signals also exist in the browsing behavior of item categories. Therefore, when we want to predict user purchase preference for certain items, we can integrate these behavior types into our user preference model by converting such preference signals into numerical values. We evaluate our model on a real-world dataset from a shopping site in China. 
Results further validate that user purchase preference model in our paper can capture more and accurate user purchase preference information from implicit feedback and greatly improves the performance of user purchase prediction.", "title": "" }, { "docid": "2b42cf158d38153463514ed7bc00e25f", "text": "The Disney Corporation made their first princess film in 1937 and has continued producing these movies. Over the years, Disney has received criticism for their gender interpretations and lack of racial diversity. This study will examine princess films from the 1990’s and 2000’s and decide whether race or time has an effect on the gender role portrayal of each character. By using a content analysis, this study identified the changes with each princess. The findings do suggest the princess characters exhibited more egalitarian behaviors over time. 1 The Disney Princess franchise began in 1937 with Snow White and the Seven Dwarfs and continues with the most recent film was Tangled (Rapunzel) in 2011. In past years, Disney film makers were criticized by the public audience for lack of ethnic diversity. In 1995, Disney introduced Pocahontas and three years later Mulan emerged creating racial diversity to the collection. Eleven years later, Disney released The Princess and the Frog (2009). The ongoing question is whether diverse princesses maintain the same qualities as their European counterparts. Walt Disney’s legacy lives on, but viewers are still curious about the all white princess collection which did not gain racial counterparts until 58 years later. It is important to recognize the role the Disney Corporation plays in today’s society. The company has several princesses’ films with matching merchandise. Parents purchase the items for their children and through film and merchandise, children are receiving messages such as how a woman ought to act, think or dress. Gender construction in Disney princess films remains important because of the messages it sends to children. We need to know whether gender roles presented in the films downplay the intellect of a woman in a modern society or whether Disney princesses are constricted to the female gender roles such as submissiveness and nurturing. In addition, we need to consider whether the messages are different for diverse princesses. The purpose of the study is to investigate the changes in gender construction in Disney princess characters related to the race of the character. This research also examines how gender construction of Disney princess characters changed from the 1900’s to 2000’s. A comparative content analysis will analyze gender role differences between women of color and white princesses. In particular, the study will ask whether race does matter in the gender roles revealed among each female character. By using social construction perspectives, Disney princesses of color were more masculine, but the most recent films became more egalitarian. 2 LITERATURE REVIEW Women in Disney film Davis (2006) examined women in Disney animated films by creating three categories: The Classic Years, The Middle Era, and The Eisner Era. The Classic Years, 19371967 were described as the beginning of Disney. During this period, women were rarely featured alone in films, but held central roles in the mid-1930s (Davis 2006:84). Three princess films were released and the characters carried out traditional feminine roles such as domestic work and passivity. Davis (2006) argued the princesses during The Classic Era were the least active and dynamic. 
The Middle Era, 1967-1988, led to a downward spiral for the company after the deaths of Walt and Roy Disney. The company faced increased amounts of debt and only eight Disney films were produced. The representation of women remained largely static (Davis 2006:137). The Eisner Era, 1989-2005, represented a revitalization of Disney with the release of 12 films with leading female roles. Based on the eras, Davis argued there was a shift after Walt Disney’s death which allowed more women in leading roles and released them from traditional gender roles. Independence was a new theme in this era allowing women to be selfsufficient unlike women in The Classic Era who relied on male heroines. Gender Role Portrayal in films England, Descartes, and Meek (2011) examined the Disney princess films and challenged the ideal of traditional gender roles among the prince and princess characters. The study consisted of all nine princess films divided into three categories based on their debut: early, middle and most current. The researchers tested three hypotheses: 1) gender roles among males and female characters would differ, 2) males would rescue or attempt to rescue the princess, and 3) characters would display more egalitarian behaviors over time (England, et al. 2011:557-58). The researchers coded traits as masculine and feminine. They concluded that princesses 3 displayed a mixture of masculine and feminine characteristics. These behaviors implied women are androgynous beings. For example, princesses portrayed bravery almost twice as much as princes (England, et al. 2011). The findings also showed males rescued women more and that women were rarely shown as rescuers. Overall, the data indicated Disney princess films had changed over time as women exhibited more masculine behaviors in more recent films. Choueiti, Granados, Pieper, and Smith (2010) conducted a content analysis regarding gender roles in top grossing Grated films. The researchers considered the following questions: 1) What is the male to female ratio? 2) Is gender related to the presentation of the character demographics such as role, type, or age? and 3) Is gender related to the presentation of character’s likeability, and the equal distribution of male and females from 1990-2005(Choueiti et al. 2010:776-77). The researchers concluded that there were more male characters suggesting the films were patriarchal. However, there was no correlation with demographics of the character and males being viewed as more likeable. Lastly, female representation has slightly decreased from 214 characters or 30.1% in 1990-94 to 281 characters or 29.4% in 2000-2004 (Choueiti et al. 2010:783). From examining gender role portrayals, females have become androgynous while maintaining minimal roles in animated film.", "title": "" }, { "docid": "2fbc75f848a0a3ae8228b5c6cbe76ec4", "text": "The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed. 
The relationships of goal setting to other theories are described as are the theory's limitations.", "title": "" }, { "docid": "9780c2d63739b8bf4f5c48f12014f605", "text": "It has been hypothesized that unexplained infertility may be related to specific personality and coping styles. We studied two groups of women with explained infertility (EIF, n = 63) and unexplained infertility (UIF, n = 42) undergoing an in vitro fertilization (IVF) cycle. Women completed personality and coping style questionnaires prior to the onset of the cycle, and state depression and anxiety scales before and at two additional time points during the cycle. Almost no in-between group differences were found at any of the measured time points in regards to the Minnesota Multiphasic Personality Inventory-2 validity and clinical scales, Illness Cognitions and Life Orientation Test, or for the situational measures. The few differences found suggest a more adaptive, better coping, and functioning defensive system in women with EIF. In conclusion, we did not find any clinically significant personality differences or differences in depression or anxiety levels between women with EIF and UIF during an IVF cycle. Minor differences found are probably a reaction to the ambiguous medical situation with its uncertain prognosis, amplifying certain traits which are not specific to one psychological structure but rather to the common experience shared by the group. The results of this study do not support the possibility that personality traits are involved in the pathophysiology of unexplained infertility.", "title": "" }, { "docid": "c25d877f23f874a5ced7548998ec8157", "text": "The paper presents a Neural Network model for modeling academic profile of students. The proposed model allows prediction of students’ academic performance based on some of their qualitative observations. Classifying and predicting students’ academic performance using arithmetical and statistical techniques may not necessarily offer the best way to evaluate human acquisition of knowledge and skills, but a hybridized fuzzy neural network model successfully handles reasoning with imprecise information, and enables representation of student modeling in the linguistic form the same way the human teachers do. The model is designed, developed and tested in MATLAB and JAVA which considers factors like age, gender, education, past performance, work status, study environment etc. for performance prediction of students. A Fuzzy Probabilistic Neural Network model has been proposed which enables the design of an easy-to-use, personalized student performance prediction component. The results of experiments show that the model outperforms traditional back-propagation neural networks as well as statistical models. It is also found to be a useful tool in predicting the performance of students belonging to any stream. The model may provide dual advantage to the educational institutions; first by helping teachers to amend their teaching methodology based on the level of students thereby improving students’ performances and secondly classifying the likely successful and unsuccessful students.", "title": "" }, { "docid": "02750b69e72daf7f82cb57e1f7f228bf", "text": "An advanced, simple to use, detrending method to be used before heart rate variability analysis (HRV) is presented. The method is based on smoothness priors approach and operates like a time-varying finite-impulse response high-pass filter. 
The effect of the detrending on time- and frequency-domain analysis of HRV is studied.", "title": "" }, { "docid": "93a283324fed31e4ecf81d62acae583a", "text": "The success of the state-of-the-art deblurring methods mainly depends on the restoration of sharp edges in a coarse-to-fine kernel estimation process. In this paper, we propose to learn a deep convolutional neural network for extracting sharp edges from blurred images. Motivated by the success of the existing filtering-based deblurring methods, the proposed model consists of two stages: suppressing extraneous details and enhancing sharp edges. We show that the two-stage model simplifies the learning process and effectively restores sharp edges. Facilitated by the learned sharp edges, the proposed deblurring algorithm does not require any coarse-to-fine strategy or edge selection, thereby significantly simplifying kernel estimation and reducing computation load. Extensive experimental results on challenging blurry images demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of visual quality and run-time.", "title": "" }, { "docid": "c688d24fd8362a16a19f830260386775", "text": "We present a fast iterative algorithm for identifying the Support Vectors of a given set of points. Our algorithm works by maintaining a candidate Support Vector set. It uses a greedy approach to pick points for inclusion in the candidate set. When the addition of a point to the candidate set is blocked because of other points already present in the set we use a backtracking approach to prune away such points. To speed up convergence we initialize our algorithm with the nearest pair of points from opposite classes. We then use an optimization based approach to increment or prune the candidate Support Vector set. The algorithm makes repeated passes over the data to satisfy the KKT constraints. The memory requirements of our algorithm scale as O(|S|) in the average case, where|S| is the size of the Support Vector set. We show that the algorithm is extremely competitive as compared to other conventional iterative algorithms like SMO and the NPA. We present results on a variety of real life datasets to validate our claims.", "title": "" }, { "docid": "65fe1d49a386f62d467b2796a270510c", "text": "The connection between human resources and performance in firms in the private sector is well documented. What is less clear is whether the move towards managerialism that has taken place within the Australian public sector during the last twenty years has brought with it some of the features of the relationships between Human Resource Management (HRM) and performance experienced within the private sector. The research begins with a review of the literature. In particular the conceptual thinking surrounding the connection between HRM and performance within private sector organisations is explored. Issues of concern are the direction of the relationship between HRM and performance and definitional questions as to the nature and level of HRM to be investigated and the measurement of performance. These conceptual issues are also debated within the context of a public sector and particularly the Australian environment. An outcome of this task is the specification of a set of appropriate parameters for a study of these linkages within Australian public sector organizations. 
Short Description The paper discusses the significance of strategic human resource management in relation to performance.", "title": "" }, { "docid": "b77c65cf9fe637fc88752f6776a21e36", "text": "This paper studies computer security from first principles. The basic questions \"Why?\", \"How do we know what we know?\" and \"What are the implications of what we believe?\"", "title": "" }, { "docid": "8305594d16f0565e3a62cbb69821c485", "text": "MOTIVATION\nAccurately predicting protein secondary structure and relative solvent accessibility is important for the study of protein evolution, structure and function and as a component of protein 3D structure prediction pipelines. Most predictors use a combination of machine learning and profiles, and thus must be retrained and assessed periodically as the number of available protein sequences and structures continues to grow.\n\n\nRESULTS\nWe present newly trained modular versions of the SSpro and ACCpro predictors of secondary structure and relative solvent accessibility together with their multi-class variants SSpro8 and ACCpro20. We introduce a sharp distinction between the use of sequence similarity alone, typically in the form of sequence profiles at the input level, and the additional use of sequence-based structural similarity, which uses similarity to sequences in the Protein Data Bank to infer annotations at the output level, and study their relative contributions to modern predictors. Using sequence similarity alone, SSpro's accuracy is between 79 and 80% (79% for ACCpro) and no other predictor seems to exceed 82%. However, when sequence-based structural similarity is added, the accuracy of SSpro rises to 92.9% (90% for ACCpro). Thus, by combining both approaches, these problems appear now to be essentially solved, as an accuracy of 100% cannot be expected for several well-known reasons. These results point also to several open technical challenges, including (i) achieving on the order of ≥ 80% accuracy, without using any similarity with known proteins and (ii) achieving on the order of ≥ 85% accuracy, using sequence similarity alone.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSSpro, SSpro8, ACCpro and ACCpro20 programs, data and web servers are available through the SCRATCH suite of protein structure predictors at http://scratch.proteomics.ics.uci.edu.", "title": "" }, { "docid": "3eb50289c3b28d2ce88052199d40bf8d", "text": "Transportation Problem is an important aspect which has been widely studied in Operations Research domain. It has been studied to simulate different real life problems. In particular, application of this Problem in NPHard Problems has a remarkable significance. In this Paper, we present a comparative study of Transportation Problem through Probabilistic and Fuzzy Uncertainties. Fuzzy Logic is a computational paradigm that generalizes classical two-valued logic for reasoning under uncertainty. In order to achieve this, the notation of membership in a set needs to become a matter of degree. By doing this we accomplish two things viz., (i) ease of describing human knowledge involving vague concepts and (ii) enhanced ability to develop cost-effective solution to real-world problem. The multi-valued nature of Fuzzy Sets allows handling uncertain and vague information. It is a model-less approach and a clever disguise of Probability Theory. We give comparative simulation results of both approaches and discuss the Computational Complexity. 
To the best of our knowledge, this is the first work on a comparative study of the Transportation Problem using Probabilistic and Fuzzy Uncertainties.", "title": "" } ]
scidocsrr
46ebfa26fb7981c876cf3c7a2cfae58d
Understanding Information
[ { "docid": "aa32bff910ce6c7b438dc709b28eefe3", "text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: m.batty@ucl.ac.uk 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science", "title": "" } ]
[ { "docid": "e59136e0d0a710643a078b58075bd8cd", "text": "PURPOSE\nEpidemiological evidence suggests that chronic consumption of fruit-based flavonoids is associated with cognitive benefits; however, the acute effects of flavonoid-rich (FR) drinks on cognitive function in the immediate postprandial period require examination. The objective was to investigate whether consumption of FR orange juice is associated with acute cognitive benefits over 6 h in healthy middle-aged adults.\n\n\nMETHODS\nMales aged 30-65 consumed a 240-ml FR orange juice (272 mg) and a calorie-matched placebo in a randomized, double-blind, counterbalanced order on 2 days separated by a 2-week washout. Cognitive function and subjective mood were assessed at baseline (prior to drink consumption) and 2 and 6 h post consumption. The cognitive battery included eight individual cognitive tests. A standardized breakfast was consumed prior to the baseline measures, and a standardized lunch was consumed 3 h post-drink consumption.\n\n\nRESULTS\nChange from baseline analysis revealed that performance on tests of executive function and psychomotor speed was significantly better following the FR drink compared to the placebo. The effects of objective cognitive function were supported by significant benefits for subjective alertness following the FR drink relative to the placebo.\n\n\nCONCLUSIONS\nThese data demonstrate that consumption of FR orange juice can acutely enhance objective and subjective cognition over the course of 6 h in healthy middle-aged adults.", "title": "" }, { "docid": "2690f802022b273d41b3131aa982b91b", "text": "Deep neural networks are demonstrating excellent performance on several classical vision problems. However, these networks are vulnerable to adversarial examples, minutely modified images that induce arbitrary attacker-chosen output from the network. We propose a mechanism to protect against these adversarial inputs based on a generative model of the data. We introduce a pre-processing step that projects on the range of a generative model using gradient descent before feeding an input into a classifier. We show that this step provides the classifier with robustness against first-order, substitute model, and combined adversarial attacks. Using a min-max formulation, we show that there may exist adversarial examples even in the range of the generator, natural-looking images extremely close to the decision boundary for which the classifier has unjustifiedly high confidence. We show that adversarial training on the generative manifold can be used to make a classifier that is robust to these attacks. Finally, we show how our method can be applied even without a pre-trained generative model using a recent method called the deep image prior. We evaluate our method on MNIST, CelebA and Imagenet and show robustness against the current state of the art attacks.", "title": "" }, { "docid": "1c5e17c7acff27e3b10aecf15c5809e7", "text": "Recent years witness a growing interest in nonstandard epistemic logics of “knowing whether”, “knowing what”, “knowing how” and so on. These logics are usually not normal, i.e., the standard axioms and reasoning rules for modal logic may be invalid. In this paper, we show that the conditional “knowing value” logic proposed by Wang and Fan [10] can be viewed as a disguised normal modal logic by treating the negation of Kv operator as a special diamond. 
Under this perspective, it turns out that the original first-order Kripke semantics can be greatly simplified by introducing a ternary relation R i in standard Kripke models which associates one world with two i-accessible worlds that do not agree on the value of constant c. Under intuitive constraints, the modal logic based on such Kripke models is exactly the one studied by Wang and Fan [10,11]. Moreover, there is a very natural binary generalization of the “knowing value” diamond, which, surprisingly, does not increase the expressive power of the logic. The resulting logic with the binary diamond has a transparent normal modal system which sharpens our understanding of the “knowing value” logic and simplifies some previous hard problems.", "title": "" }, { "docid": "0ee27f9045935db4241e9427bed2af59", "text": "As a new generation of deep-sea Autonomous Underwater Vehicle (AUV), Qianlong I is a 6000m rated glass deep-sea manganese nodules detection AUV which based on the CR01 and the CR02 deep-sea AUVs and developed by Shenyang Institute of Automation, the Chinese Academy of Sciences from 2010. The Qianlong I was tested in the thousand-isles lake in Zhejiang Province of China during November 2012 to March 2013 and the sea trials were conducted in the South China Sea during April 20-May 2, 2013 after the lake tests and the ocean application completed in October 2013. This paper describes two key problems encountered in the process of developing Qianlong I, including the launch and recovery systems development and variable buoyancy system development. Results from the recent lake and sea trails are presented, and future missions and development plans are discussed.", "title": "" }, { "docid": "98d1c35aeca5de703cec468b2625dc72", "text": "Congenital adrenal hyperplasia was described in London by Phillips (1887) who reported four cases of spurious hermaphroditism in one family. Fibiger (1905) noticed that there was enlargement of the adrenal glands in some infants who had died after prolonged vomiting and dehydration. Butler, Ross, and Talbot (1939) reported a case which showed serum electrolyte changes similar to those of Addison's disease. Further developments had to await the synthesis of cortisone. The work ofWilkins, Lewis, Klein, and Rosemberg (1950) showed that cortisone could alleviate the disorder and suppress androgen secretion. Bartter, Albright, Forbes, Leaf, Dempsey, and Carroll (1951) suggested that, in congenital adrenal hyperplasia, there might be a primary impairment of synthesis of cortisol (hydrocortisone, compound F) and a secondary rise of pituitary adrenocorticotrophin (ACTH) production. This was confirmed by Jailer, Louchart, and Cahill (1952) who showed that ACTH caused little increase in the output of cortisol in such cases. In the same year, Snydor, Kelley, Raile, Ely, and Sayers (1953) found an increased level ofACTH in the blood of affected patients. Studies of enzyme systems were carried out. Jailer, Gold, Vande Wiele, and Lieberman (1955) and Frantz, Holub, and Jailer (1960) produced evidence that the most common site for the biosynthetic block was in the C-21 hydroxylating system. 
Eberlein and Bongiovanni (1955) showed that there was a C-l 1 hydroxylation defect in patients with the hypertensive form of congenital adrenal hyperplasia, and Bongiovanni (1961) and Bongiovanni and Kellenbenz (1962), showed that in some patients there was a further type of enzyme defect, a 3-(-hydroxysteroid dehydrogenase deficiency, an enzyme which is required early in the metabolic pathway. Prader and Siebenmann (1957) described a female infant who had adrenal insufficiency and congenital lipoid hyperplasia of the", "title": "" }, { "docid": "2466ac1ce3d54436f74b5bb024f89662", "text": "In this paper we discuss our work on applying media theory to the creation of narrative augmented reality (AR) experiences. We summarize the concepts of remediation and media forms as they relate to our work, argue for their importance to the development of a new medium such as AR, and present two example AR experiences we have designed using these conceptual tools. In particular, we focus on leveraging the interaction between the physical and virtual world, remediating existing media (film, stage and interactive CD-ROM), and building on the cultural expectations of our users.", "title": "" }, { "docid": "bf03f941bcf921a44d0a34ec2161ee34", "text": "Epidermolytic ichthyosis (EI) is a rare autosomal dominant genodermatosis that presents at birth as a bullous disease, followed by a lifelong ichthyotic skin disorder. Essentially, it is a defective keratinization caused by mutations of keratin 1 (KRT1) or keratin 10 (KRT10) genes, which lead to skin fragility, blistering, and eventually hyperkeratosis. Successful management of EI in the newborn period can be achieved through a thoughtful, directed, and interdisciplinary or multidisciplinary approach that encompasses family support. This condition requires meticulous care to avoid associated morbidities such as infection and dehydration. A better understanding of the disrupted barrier protection of the skin in these patients provides a basis for management with daily bathing, liberal emollients, pain control, and proper nutrition as the mainstays of treatment. In addition, this case presentation will include discussions on the pathophysiology, complications, differential diagnosis, and psychosocial and ethical issues.", "title": "" }, { "docid": "b8b96789191e5afa48bea1d9e92443d5", "text": "Methionine, cysteine, homocysteine, and taurine are the 4 common sulfur-containing amino acids, but only the first 2 are incorporated into proteins. Sulfur belongs to the same group in the periodic table as oxygen but is much less electronegative. This difference accounts for some of the distinctive properties of the sulfur-containing amino acids. Methionine is the initiating amino acid in the synthesis of virtually all eukaryotic proteins; N-formylmethionine serves the same function in prokaryotes. Within proteins, many of the methionine residues are buried in the hydrophobic core, but some, which are exposed, are susceptible to oxidative damage. Cysteine, by virtue of its ability to form disulfide bonds, plays a crucial role in protein structure and in protein-folding pathways. Methionine metabolism begins with its activation to S-adenosylmethionine. This is a cofactor of extraordinary versatility, playing roles in methyl group transfer, 5'-deoxyadenosyl group transfer, polyamine synthesis, ethylene synthesis in plants, and many others. In animals, the great bulk of S-adenosylmethionine is used in methylation reactions. 
S-Adenosylhomocysteine, which is a product of these methyltransferases, gives rise to homocysteine. Homocysteine may be remethylated to methionine or converted to cysteine by the transsulfuration pathway. Methionine may also be metabolized by a transamination pathway. This pathway, which is significant only at high methionine concentrations, produces a number of toxic endproducts. Cysteine may be converted to such important products as glutathione and taurine. Taurine is present in many tissues at higher concentrations than any of the other amino acids. It is an essential nutrient for cats.", "title": "" }, { "docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db", "text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.", "title": "" }, { "docid": "372182b4ac2681ceedb9d78e9f38343d", "text": "A 12-bit 10-GS/s interleaved (IL) pipeline analog-to-digital converter (ADC) is described in this paper. The ADC achieves a signal to noise and distortion ratio (SNDR) of 55 dB and a spurious free dynamic range (SFDR) of 66 dB with a 4-GHz input signal, is fabricated in the 28-nm CMOS technology, and dissipates 2.9 W. Eight pipeline sub-ADCs are interleaved to achieve 10-GS/s sample rate, and mismatches between sub-ADCs are calibrated in the background. The pipeline sub-ADCs employ a variety of techniques to lower power, like avoiding a dedicated sample-and-hold amplifier (SHA-less), residue scaling, flash background calibration, dithering and inter-stage gain error background calibration. A push–pull input buffer optimized for high-frequency linearity drives the interleaved sub-ADCs to enable >7-GHz bandwidth. A fast turn-ON bootstrapped switch enables 100-ps sampling. The ADC also includes the ability to randomize the sub-ADC selection pattern to further reduce residual interleaving spurs.", "title": "" }, { "docid": "eb956188486caa595b7f38d262781af7", "text": "Due to the competitiveness of the computing industry, software developers are pressured to quickly deliver new code releases. At the same time, operators are expected to update and keep production systems stable at all times. To overcome the development–operations barrier, organizations have started to adopt Infrastructure as Code (IaC) tools to efficiently deploy middleware and applications using automation scripts. These automations comprise a series of steps that should be idempotent to guarantee repeatability and convergence. Rigorous testing is required to ensure that the system idempotently converges to a desired state, starting from arbitrary states. We propose and evaluate a model-based testing framework for IaC. An abstracted system model is utilized to derive state transition graphs, based on which we systematically generate test cases for the automation. The test cases are executed in light-weight virtual machine environments. Our prototype targets one popular IaC tool (Chef), but the approach is general. We apply our framework to a large base of public IaC scripts written by operators, showing that it correctly detects non-idempotent automations.", "title": "" }, { "docid": "b3790611437e1660b7c222adcb26b510", "text": "There have been increasing interests in the robotics community in building smaller and more agile autonomous micro aerial vehicles (MAVs). 
In particular, the monocular visual-inertial system (VINS) that consists of only a camera and an inertial measurement unit (IMU) forms a great minimum sensor suite due to its superior size, weight, and power (SWaP) characteristics. In this paper, we present a tightly-coupled nonlinear optimization-based monocular VINS estimator for autonomous rotorcraft MAVs. Our estimator allows the MAV to execute trajectories at 2 m/s with roll and pitch angles up to 30 degrees. We present extensive statistical analysis to verify the performance of our approach in different environments with varying flight speeds.", "title": "" }, { "docid": "7f61235bb8b77376936256dcf251ee0b", "text": "These practical guidelines for the biological treatment of personality disorders in primary care settings were developed by an international Task Force of the World Federation of Societies of Biological Psychiatry (WFSBP). They embody the results of a systematic review of all available clinical and scientific evidence pertaining to the biological treatment of three specific personality disorders, namely borderline, schizotypal and anxious/avoidant personality disorder in addition to some general recommendations for the whole field. The guidelines cover disease definition, classification, epidemiology, course and current knowledge on biological underpinnings, and provide a detailed overview on the state of the art of clinical management. They deal primarily with biological treatment (including antidepressants, neuroleptics, mood stabilizers and some further pharmacological agents) and discuss the relative significance of medication within the spectrum of treatment strategies that have been tested for patients with personality disorders, up to now. The recommendations should help the clinician to evaluate the efficacy spectrum of psychotropic drugs and therefore to select the drug best suited to the specific psychopathology of an individual patient diagnosed for a personality disorder.", "title": "" }, { "docid": "0122057f9fd813efd9f9e0db308fe8d9", "text": "Noun phrases in queries are identified and classified into four types: proper names, dictionary phrases, simple phrases and complex phrases. A document has a phrase if all content words in the phrase are within a window of a certain size. The window sizes for different types of phrases are different and are determined using a decision tree. Phrases are more important than individual terms. Consequently, documents in response to a query are ranked with matching phrases given a higher priority. We utilize WordNet to disambiguate word senses of query terms. Whenever the sense of a query term is determined, its synonyms, hyponyms, words from its definition and its compound words are considered for possible additions to the query. Experimental results show that our approach yields between 23% and 31% improvements over the best-known results on the TREC 9, 10 and 12 collections for short (title only) queries, without using Web data.", "title": "" }, { "docid": "5416e2a3f5a1855f19814eecec85092a", "text": "Code clones are exactly or nearly similar code fragments in the code-base of a software system. Existing studies show that clones are directly related to bugs and inconsistencies in the code-base. Code cloning (making code clones) is suspected to be responsible for replicating bugs in the code fragments. However, there is no study on the possibilities of bug-replication through cloning process. Such a study can help us discover ways of minimizing bug-replication. 
Focusing on this we conduct an empirical study on the intensities of bug-replication in the code clones of the major clone-types: Type 1, Type 2, and Type 3. According to our investigation on thousands of revisions of six diverse subject systems written in two different programming languages, C and Java, a considerable proportion (i.e., up to 10%) of the code clones can contain replicated bugs. Both Type 2 and Type 3 clones have higher tendencies of having replicated bugs compared to Type 1 clones. Thus, Type 2 and Type 3 clones are more important from clone management perspectives. The extent of bug-replication in the buggy clone classes is generally very high (i.e., 100% in most of the cases). We also find that overall 55% of all the bugs experienced by the code clones can be replicated bugs. Our study shows that replication of bugs through cloning is a common phenomenon. Clone fragments having method-calls and if-conditions should be considered for refactoring with high priorities, because such clone fragments have high possibilities of containing replicated bugs. We believe that our findings are important for better maintenance of software systems, in particular, systems with code clones.", "title": "" }, { "docid": "ea95f4475bb65f7ea0f270387919df47", "text": "The field of supramolecular chemistry focuses on the non-covalent interactions between molecules that give rise to molecular recognition and self-assembly processes. Since most non-covalent interactions are relatively weak and form and break without significant activation barriers, many supramolecular systems are under thermodynamic control. Hence, traditionally, supramolecular chemistry has focused predominantly on systems at equilibrium. However, more recently, self-assembly processes that are governed by kinetics, where the outcome of the assembly process is dictated by the assembly pathway rather than the free energy of the final assembled state, are becoming topical. Within the kinetic regime it is possible to distinguish between systems that reside in a kinetic trap and systems that are far from equilibrium and require a continuous supply of energy to maintain a stationary state. In particular, the latter systems have vast functional potential, as they allow, in principle, for more elaborate structural and functional diversity of self-assembled systems - indeed, life is a prime example of a far-from-equilibrium system. In this Review, we compare the different thermodynamic regimes using some selected examples and discuss some of the challenges that need to be addressed when developing new functional supramolecular systems.", "title": "" }, { "docid": "4d87a5793186fc1dcaa51abcc06135a7", "text": "PURPOSE OF REVIEW\nArboviruses have been associated with central and peripheral nervous system injuries, in special the flaviviruses. Guillain-Barré syndrome (GBS), transverse myelitis, meningoencephalitis, ophthalmological manifestations, and other neurological complications have been recently associated to Zika virus (ZIKV) infection. In this review, we aim to analyze the epidemiological aspects, possible pathophysiology, and what we have learned about the clinical and laboratory findings, as well as treatment of patients with ZIKV-associated neurological complications.\n\n\nRECENT FINDINGS\nIn the last decades, case series have suggested a possible link between flaviviruses and development of GBS. Recently, large outbreaks of ZIKV infection in Asia and the Americas have led to an increased incidence of GBS in these territories. 
Rapidly, several case reports and case series have reported an increase of all clinical forms and electrophysiological patterns of GBS, also including cases with associated central nervous system involvement. Finally, cases suggestive of acute transient polyneuritis, as well as acute and progressive postinfectious neuropathies associated to ZIKV infection have been reported, questioning the usually implicated mechanisms of neuronal injury.\n\n\nSUMMARY\nThe recent ZIKV outbreaks have triggered the occurrence of a myriad of neurological manifestations likely associated to this arbovirosis, in special GBS and its variants.", "title": "" }, { "docid": "f312bfe7f80fdf406af29bfde635fa36", "text": "In two studies, a newly devised test (framed-line test) was used to examine the hypothesis that individuals engaging in Asian cultures are more capable of incorporating contextual information and those engaging in North American cultures are more capable of ignoring contextual information. On each trial, participants were presented with a square frame, within which was printed a vertical line. Participants were then shown another square frame of the same or different size and asked to draw a line that was identical to the first line in either absolute length (absolute task) or proportion to the height of the surrounding frame (relative task). The results supported the hypothesis: Whereas Japanese were more accurate in the relative task, Americans were more accurate in the absolute task. Moreover, when engaging in another culture, individuals tended to show the cognitive characteristic common in the host culture.", "title": "" }, { "docid": "b213afb537bbc4c476c760bb8e8f2997", "text": "Recommender system has been demonstrated as one of the most useful tools to assist users' decision makings. Several recommendation algorithms have been developed and implemented by both commercial and open-source recommendation libraries. Context-aware recommender system (CARS) emerged as a novel research direction during the past decade and many contextual recommendation algorithms have been proposed. Unfortunately, no recommendation engines start to embed those algorithms in their kits, due to the special characteristics of the data format and processing methods in the domain of CARS. This paper introduces an open-source Java-based context-aware recommendation engine named as CARSKit which is recognized as the 1st open source recommendation library specifically designed for CARS. It implements the state-of-the-art context-aware recommendation algorithms, and we will showcase the ease with which CARSKit allows recommenders to be configured and evaluated in this demo.", "title": "" }, { "docid": "101c03b85e3cc8518a158d89cc9b3b39", "text": "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. 
We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.", "title": "" } ]
scidocsrr
41d09465649a1053723548236adc1f7c
Detection and Analysis of Irregular Streaks in Dermoscopic Images of Skin Lesions
[ { "docid": "dbb5081b819938a3a8d6003576874d10", "text": "The importance of recognizing early melanoma is generally accepted. Because not all pigmented skin lesions can be diagnosed correctly by their clinical appearance, additional criteria are required for the clinical diagnosis of such lesions. In vivo epiluminescence microscopy provides for a more detailed inspection of the surface of pigmented skin lesions, and, by using the oil immersion technic, which renders the epidermis translucent, opens a new dimension of skin morphology by including the dermoepidermal junction into the macroscopic evaluation of a lesion. In an epiluminescence microscopy study of more than 3000 pigmented skin lesions we have defined morphologic criteria that are not readily apparent to the naked eye but that are detected easily by epiluminescence microscopy and represent relatively reliable markers of benign and malignant pigmented skin lesions. These features include specific patterns, colors, and intensities of pigmentation, as well as the configuration, regularity, and other characteristics of both the margin and the surface of pigmented skin lesions. Pattern analysis of these features permits a distinction between different types of pigmented skin lesions and, in particular, between benign and malignant growth patterns. Epiluminescence microscopy is thus a valuable addition to the diagnostic armamentarium of pigmented skin lesions at a clinical level.", "title": "" } ]
[ { "docid": "071136d78ce8e3001e4b1bb47dc43d48", "text": "Graphene-enabled wireless communications constitute a novel paradigm which has been proposed to implement wireless communications among nanosystems. Indeed, graphene-based plasmonic nano-antennas, or graphennas, just a few micrometers in size have been predicted to radiate electromagnetic waves at the terahertz band. In this work, the important role of the graphene conductivity in the characteristics of graphennas is analyzed, and their radiation performance both in transmission and reception is numerically studied. The resonance frequency of graphennas is calculated as a function of their length and width, both analytically and by simulation. Moreover, the influence of a dielectric substrate with a variable size, and the position of the patch with respect to the substrate is also evaluated. Further, important properties of graphene, such as its chemical potential or its relaxation time, are found to have a profound impact in the radiation properties of graphennas. Finally, the radiation pattern of a graphenna is compared to that of an equivalent metallic antenna. These results will prove useful for designers of future graphennas, which are expected to enable wireless communications", "title": "" }, { "docid": "709efaca57b9eef28e9a58eb4c4c5ace", "text": "BACKGROUND\nThe increasing use of zebrafish model has not been accompanied by the evolution of proper anaesthesia for this species in research. The most used anaesthetic in fishes, MS222, may induce aversion, reduction of heart rate, and consequently high mortality, especially during long exposures. Therefore, we aim to explore new anaesthetic protocols to be used in zebrafish by studying the quality of anaesthesia and recovery induced by different concentrations of propofol alone and in combination with different concentrations of lidocaine.\n\n\nMATERIAL AND METHODS\nIn experiment A, eighty-three AB zebrafish were randomly assigned to 7 different groups: control, 2.5 (2.5P), 5 (5P) or 7.5 μg/ml (7.5P) of propofol; and 2.5 μg/ml of propofol combined with 50, (P/50L), 100 (P/100L) or 150 μg/ml (P/150L) of lidocaine. Zebrafish were placed in an anaesthetic water bath and time to lose the equilibrium, reflex to touch, reflex to a tail pinch, and respiratory rate were measured. Time to gain equilibrium was also assessed in a clean tank. Five and 24 hours after anaesthesia recovery, zebrafish were evaluated concerning activity and reactivity. Afterwards, in a second phase of experiments (experiment B), the best protocol of the experiment A was compared with a new group of 8 fishes treated with 100 mg/L of MS222 (100M).\n\n\nRESULTS\nIn experiment A, only different concentrations of propofol/lidocaine combination induced full anaesthesia in all animals. Thus only these groups were compared with a standard dose of MS222 in experiment B. Propofol/lidocaine induced a quicker loss of equilibrium, and loss of response to light and painful stimuli compared with MS222. However zebrafish treated with MS222 recovered quickly than the ones treated with propofol/lidocaine.\n\n\nCONCLUSION\nIn conclusion, propofol/lidocaine combination and MS222 have advantages in different situations. 
MS222 is ideal for minor procedures when a quick recovery is important, while propofol/lidocaine is best to induce a quick and complete anaesthesia.", "title": "" }, { "docid": "27693b8404d3fb84d880f4ff762a848a", "text": "Cloud computing is an emerging, revenue generating and internet based technology, where computing power, storage and other resources are provided to the stakeholders in a single package. The traditional online banking systems can make use of the cloud framework for providing economical and high-speed online service to the consumers. This paper first describes a systematic Multi-factor bio-metric Fingerprint Authentication (MFA) approach which provides a high-secure identity verification process for validating the legitimacy of the remote users. The significance of this approach is that the authentication credentials of the users are not revealed to the bank and cloud authentication servers, but allows the servers to perform remote users’ authentication. We then extend this investigated framework to develop a privacy protection gateway for obscuring and desensitizing the customers’ account details using tokenization and data anonymization techniques. This approach retains the original format of data fields at various levels of the database management systems and makes the data worthless to others except the owner. In addition to designing an efficient MFA, through extensive experimental results we illustrate our privacy protection gateway is practical and effective.", "title": "" }, { "docid": "97a38b97186062d403f01f449da2e807", "text": "Creating the first World Championclass chess computer belongs among the oldest challenges in computer science. When World Chess Champion Garry Kasparov resigned the last game of a six-game match against IBM’s Deep Blue supercomputer on 11 May 1997, his loss marked achievement of this goal. The quest for a “chess machine” dates back to 1769 when the Turk—with a human player hidden inside—debuted in the Austrian court. The arrival of electronic computers in the late 1940s spurred new research interest in chess programs. Early programs emphasized the emulation of the human chess thought process. The Chess 4.5 program in the late 1970s first demonstrated that an engineering approach emphasizing hardware speed might be more fruitful. Belle, a specialpurpose hardwired chess machine from Bell Laboratories, became the first national master program in the early 1980s. Following the same trend, Cray Blitz running on a Cray supercomputer, and Hitech, another specialpurpose chess machine, became the top programs in the mid-1980s. For the next 10 years or so, chess machines based on a move generator of my design— ChipTest (1986-1987), Deep Thought (1988-1991), and Deep Thought II (19921995)—claimed spots as the top chess programs in the world. In 1988 the Deep Thought team won the second Fredkin Intermediate Prize for Grandmaster-level performance for Deep Thought’s 2650-plus rating on the USCF’s scale over 25 consecutive games. Deep Blue’s 1996 debut in the first Kasparov versus Deep Blue match in Philadelphia finally eclipsed Deep Thought II. The 1996 version of Deep Blue used a new chess chip designed at IBM Research over the course of three years. A major revision of this chip participated in the historic 1997 rematch between Kasparov and Deep Blue. This article concentrates mainly on the revised chip.", "title": "" }, { "docid": "ae1a909210c44fcdee61a14b70ab0bb7", "text": "Users of social media sites can use more than one account. 
These identities have pseudo anonymous properties, and as such some users abuse multiple accounts to perform undesirable actions, such as posting false or misleading remarks comments that praise or defame the work of others. The detection of multiple user accounts that are controlled by an individual or organization is important. Herein, we define the problem as sockpuppet gang (SPG) detection. First, we analyze user sentiment orientation to topics based on emotional phrases extracted from their posted comments. Then we evaluate the similarity between sentiment orientations of user account pairs, and build a similar-orientation network (SON) where each vertex represents a user account on a social media site. In an SON, an edge exists only if the two user accounts have similar sentiment orientations to most topics. The boundary between detected SPGs may be indistinct, thus by analyzing account posting behavior features we propose a multiple random walk method to iteratively remeasure the weight of each edge. Finally, we adopt multiple community detection algorithms to detect SPGs in the network. User accounts in the same SPG are considered to be controlled by the same individual or organization. In our experiments on real world datasets, our method shows better performance than other contemporary methods.", "title": "" }, { "docid": "d1e9eb1357381310c4540a6dcbe8973a", "text": "We introduce a method for learning Bayesian networks that handles the discretization of continuous variables as an integral part of the learning process. The main ingredient in this method is a new metric based on the Minimal Description Length principle for choosing the threshold values for the discretization while learning the Bayesian network structure. This score balances the complexity of the learned discretization and the learned network structure against how well they model the training data. This ensures that the discretization of each variable introduces just enough intervals to capture its interaction with adjacent variables in the network. We formally derive the new metric, study its main properties, and propose an iterative algorithm for learning a discretization policy. Finally, we illustrate its behavior in applications to supervised learning.", "title": "" }, { "docid": "89bcf5b0af2f8bf6121e28d36ca78e95", "text": "3 Relating modules to external clinical traits 2 3.a Quantifying module–trait associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership . . . . 2 3.c Intramodular analysis: identifying genes with high GS and MM . . . . . . . . . . . . . . . . . . . . . . 3 3.d Summary output of network analysis results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4", "title": "" }, { "docid": "97ee8fc3c9ed2fffe54003df0a350e77", "text": "Universit g.2013.0 Abstract Consanguineous marriages have been practiced since the early existence of modern humans. Until now, consanguinity is widely practiced in several global communities with variable rates. The present study was undertaken to analyze the effect of consanguinity on different types of genetic diseases and child morbidity and mortality. Patients were grouped according to the types of genetic errors into four groups: Group I: Chromosomal and microdeletion syndromes. Group II: Single gene disorders. Group III: Multifactorial disorders. Group IV: Diseases of different etiologies. 
Consanguineous marriage was highly significant in 54.4% of the studied group compared to 35.3% in the control group (P < 0.05). Consanguineous marriages were represented in 31.4%, 7.1%, 0.8%, 6%, 9.1% among first cousins, one and a half cousins, double first cousins, second cousins and remote relatives respectively in the studied group. Comparison between genetic diseases with different modes of inheritance showed that recessive and multifactorial disorders had the highest values of consanguinity (78.8%, 69.8%, respectively), while chromosomal disorders had the lowest one (29.1%). Consanguineous marriage was recorded in 51.5% of our cases with autosomal dominant diseases and in 31% of cases with X linked diseases, all cases of mental retardation (100%) and in 92.6% of patients with limb anomalies (P < 0.001). Stillbirths, child deaths and recurrent abortions were significantly increased among consanguineous parents (80.6%, 80%, 67%) respectively than among non consanguineous parents. In conclusion, consanguineous marriage is significantly higher in many genetic diseases which suggests that couples may have deleterious lethal genes, inherited from common ancestor and when transmitted to their offsprings, they can lead to prenatal, neonatal, child morbidity or mortality. So public health education and genetic counseling are highly recommended in our community. 2013 Ain Shams University. Production and hosting by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e468fd0e6c14fee379cd1825afd018eb", "text": "Bionic implants for the deaf require wide-dynamicrange low-power microphone preamplifiers with good wide-band rejection of the supply noise. Widely used low-cost implementations of such preamplifiers typically use the buffered voltage output of an electret capacitor with a built-in JFET source follower. We describe a design in which the JFET microphone buffer’s output current, rather than its output voltage, is transduced via a sense-amplifier topology allowing good in-band power-supply rejection. The design employs a low-frequency feedback loop to subtract the dc bias current of the microphone and prevent it from causing saturation. Wide-band power-supply rejection is achieved by integrating a novel filter on all current-source biasing. Our design exhibits 80 dB of dynamic range with less than 5 Vrms of input noise while operating from a 2.8 V supply. The power consumption is 96 W which includes 60 W for the microphone built-in buffer. The in-band power-supply rejection ratio varies from 50 to 90 dB while out-of-band supply attenuation is greater than 60 dB until 25 MHz. Fabrication was done in a 1.5m CMOS process with gain programmability for both microphone and auxiliary channel inputs.", "title": "" }, { "docid": "7f7c371d5b0c315fcc89603772a060fd", "text": "While basic Web analytics tools are widespread and provide statistics about Web site navigation, no approaches exist for merging such statistics with information about the Web application structure, content and semantics. We demonstrate the advantages of combining Web application models with runtime navigation logs, at the purpose of deepening the understanding of users behaviour. 
We propose a model-driven approach that combines user interaction modeling (based on the IFML standard), full code generation of the designed application, user tracking at runtime through logging of runtime component execution and user activities, integration with page content details, generation of integrated schemaless data streams, and application of large-scale analytics and visualization tools for big data, by applying both traditional data visualization techniques and direct representation of statistics on visual models of the Web application.", "title": "" }, { "docid": "eae333084b4e7a424e056fd0d55f1add", "text": "Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Haze removal from a single image of a weather-degraded scene remains a challenging task, because the haze is dependent on the unknown depth information. In this paper, we introduce an improved single image dehazing algorithm which based on the atmospheric scattering physics-based models. We apply the local dark channel prior on selected region to estimate the atmospheric light, and obtain more accurate result. Experiments on real images validate our approach.", "title": "" }, { "docid": "8e4bd52e3b10ea019241679541c25c9d", "text": "Accurate project effort prediction is an important goal for the software engineering community. To date most work has focused upon building algorithmic models of effort, for example COCOMO. These can be calibrated to local environments. We describe an alternative approach to estimation based upon the use of analogies. The underlying principle is to characterize projects in terms of features (for example, the number of interfaces, the development method or the size of the functional requirements document). Completed projects are stored and then the problem becomes one of finding the most similar projects to the one for which a prediction is required. Similarity is defined as Euclidean distance in n-dimensional space where n is the number of project features. Each dimension is standardized so all dimensions have equal weight. The known effort values of the nearest neighbors to the new project are then used as the basis for the prediction. The process is automated using a PC-based tool known as ANGEL. The method is validated on nine different industrial datasets (a total of 275 projects) and in all cases analogy outperforms algorithmic models based upon stepwise regression. From this work we argue that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.", "title": "" }, { "docid": "4306cc9072c5b53f6fc7b79574dac117", "text": "It is popular to use real-world data to evaluate data mining techniques. However, there are some disadvantages to use real-world data for such purposes. Firstly, real-world data in most domains is difficult to obtain for several reasons, such as budget, technical or ethical. Secondly, the use of many of the real-world data is restricted, those data sets do either not contain specific patterns that are easy to mine or the data needs special preparation and the algorithm needs very specific settings in order to find patterns in it. The solution to this could be the generation of synthetic, \"meaningful data\" (data with intrinsic patterns). 
This paper presents a novel approach for generating synthetic data by developing a tool, including novel algorithms for specific data mining patterns, and a user-friendly interface, which is able to create large data sets with predefined classification rules, multilinear regression patterns. A preliminary run of the prototype proves that the generation of large amounts of such \"meaningful data\" is possible. Also the proposed approach could be extended to a further development for generating synthetic data with other intrinsic patterns.", "title": "" }, { "docid": "0ad4432a79ea6b3eefbe940adf55ff7b", "text": "This study reviews the long-term outcome of prostheses and fixtures (implants) in 759 totally edentulous jaws of 700 patients. A total of 4,636 standard fixtures were placed and followed according to the osseointegration method for a maximum of 24 years by the original team at the University of Göteborg. Standardized annual clinical and radiographic examinations were conducted as far as possible. A lifetable approach was applied for statistical analysis. Sufficient numbers of fixtures and prostheses for a detailed statistical analysis were present for observation times up to 15 years. More than 95% of maxillae had continuous prosthesis stability at 5 and 10 years, and at least 92% at 15 years. The figure for mandibles was 99% at all time intervals. Calculated from the time of fixture placement, the estimated survival rates for individual fixtures in the maxilla were 84%, 89%, and 92% at 5 years; 81% and 82% at 10 years; and 78% at 15 years. In the mandible they were 91%, 98%, and 99% at 5 years; 89% and 98% at 10 years; and 86% at 15 years. (The different percentages at 5 and 10 years refer to results for different routine groups of fixtures with 5 to 10, 10 to 15, and 1 to 5 years of observation time, respectively.) The results of this study concur with multicenter and earlier results for the osseointegration method.", "title": "" }, { "docid": "6de3aca18d6c68f0250c8090ee042a4e", "text": "JavaScript is widely used by web developers and the complexity of JavaScript programs has increased over the last year. Therefore, the need for program analysis for JavaScript is evident. Points-to analysis for JavaScript is to determine the set of objects to which a reference variable or an object property may point. Points-to analysis for JavaScript is a basis for further program analyses for JavaScript. It has a wide range of applications in code optimization and software engineering tools. However, points-to analysis for JavaScript has not yet been developed.\n JavaScript has dynamic features such as the runtime modification of objects through addition of properties or updating of methods. We propose a points-to analysis for JavaScript which precisely handles the dynamic features of JavaScript. Our work is the first attempt to analyze the points-to behavior of JavaScript. We evaluate the analysis on a set of JavaScript programs. We also apply the analysis to a code optimization technique to show that the analysis can be practically useful.", "title": "" }, { "docid": "03977b7bdc0102caf7033012354aa897", "text": "One of the important issues in service organizations is to identify the customers, understanding their difference and ranking them. Recently, the customer value as a quantitative parameter has been used for segmenting customers. 
A practical analytical solution is to use techniques such as dynamic clustering algorithms and programs to explore the dynamics in consumer preferences. The aim of this research is to understand the current customer behavior and suggest a suitable policy for new customers in order to attain the highest benefits and customer satisfaction. To identify such a market among life insurance customers, we have used the FKM.pf.noise fuzzy clustering technique for classifying the customers based on their demographic and behavioral data of 1071 people in the period April to October 2014. Results show the optimal number of clusters is 3. These three clusters can be named as: investment, security of life and a combination of both. Some suggestions are presented to improve the performance of the insurance company.", "title": "" }, { "docid": "a97f7ed65c4ba37bbda5e0af9abec425", "text": "Two novel ternary CNTFET-based SRAM cells are proposed in this paper. The first proposed CNTFET SRAM uses additional CNTFETs to sink the bit lines to ground; its operation is nearly independent of the ternary values. The second cell utilizes the traditional voltage controller (or supply) of a binary SRAM in a ternary SRAM; it consists of adding two CNTFETs to the first proposed cell. CNTFET features (such as sizing and density) and performance metrics (such as SNM and PDP) and write/read times are considered and assessed in detail. The impact of different features (such as chirality and CNT density) is also analyzed with respect to the operations of the memory cells. The effects of different process variations (such as lithography and density/number of CNTs) are extensively evaluated with respect to performance metrics. In nearly all cases, the proposed cells outperform existing CNTFET-based cells by showing a small standard deviation in the simulated memory circuits. © 2016 Published by Elsevier B.V.", "title": "" }, { "docid": "ba457819a7375c5dfee9ab870c56cc55", "text": "A biometric system is vulnerable to a variety of attacks aimed at undermining the integrity of the authentication process. These attacks are intended to either circumvent the security afforded by the system or to deter the normal functioning of the system. We describe the various threats that can be encountered by a biometric system. We specifically focus on attacks designed to elicit information about the original biometric data of an individual from the stored template. A few algorithms presented in the literature are discussed in this regard. We also examine techniques that can be used to deter or detect these attacks. Furthermore, we provide experimental results pertaining to a hybrid system combining biometrics with cryptography, that converts traditional fingerprint templates into novel cryptographic structures.", "title": "" }, { "docid": "c973dc425e0af0f5253b71ae4ebd40f9", "text": "A growing body of research on Bitcoin and other permissionless cryptocurrencies that utilize Nakamoto’s blockchain has shown that they do not easily scale to process a high throughput of transactions, or to quickly approve individual transactions; blocks must be kept small, and their creation rates must be kept low in order to allow nodes to reach consensus securely. As of today, Bitcoin processes a mere 3-7 transactions per second, and transaction confirmation takes at least several minutes. 

We present SPECTRE, a new protocol for the consensus core of crypto-currencies that remains secure even under high throughput and fast confirmation times. At any throughput, SPECTRE is resilient to attackers with up to 50% of the computational power (up until the limit defined by network congestion and bandwidth constraints). SPECTRE can operate at high block creation rates, which implies that its transactions confirm in mere seconds (limited mostly by the round-trip-time in the network). Key to SPECTRE’s achievements is the fact that it satisfies weaker properties than classic consensus requires. In the conventional paradigm, the order between any two transactions must be decided and agreed upon by all non-corrupt nodes. In contrast, SPECTRE only satisfies this with respect to transactions performed by honest users. We observe that in the context of money, two conflicting payments that are published concurrently could only have been created by a dishonest user, hence we can afford to delay the acceptance of such transactions without harming the usability of the system. Our framework formalizes this weaker set of requirements for a crypto-currency’s distributed ledger. We then provide a formal proof that SPECTRE satisfies these requirements.", "title": "" }, { "docid": "9f469cdc1864aad2026630a29c210c1f", "text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.", "title": "" } ]
scidocsrr
d3516e87c5db3ab802e14d7b9a273fe6
Bayesian Policy Gradients via Alpha Divergence Dropout Inference
[ { "docid": "ba1368e4acc52395a8e9c5d479d4fe8f", "text": "This talk will present an overview of our recent research on distributional reinforcement learning. Our starting point is our recent ICML paper, in which we argued for the fundamental importance of the value distribution: the distribution of random returns received by a reinforcement learning agent. This is in contrast to the common approach, which models the expectation of this return, or value. Back then, we were able to design a new algorithm that learns the value distribution through a TD-like bootstrap process and achieved state-of-the-art performance on games from the Arcade Learning Environment (ALE). However, this left open the question as to why the distributional approach should perform better at all. We’ve since delved deeper into what makes distributional RL work: first by improving the original using quantile regression, which directly minimizes the Wasserstein metric; and second by unearthing surprising connections between the original C51 algorithm and the distant cousin of the Wasserstein metric, the Cramer distance.", "title": "" }, { "docid": "4fc6ac1b376c965d824b9f8eb52c4b50", "text": "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as -greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.", "title": "" }, { "docid": "16915e2da37f8cd6fa1ce3a4506223ff", "text": "In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.", "title": "" } ]
[ { "docid": "83a968fcd2d77de796a8161b6dead9bc", "text": "We introduce a deep learning-based method to generate full 3D hair geometry from an unconstrained image. Our method can recover local strand details and has real-time performance. State-of-the-art hair modeling techniques rely on large hairstyle collections for nearest neighbor retrieval and then perform ad-hoc refinement. Our deep learning approach, in contrast, is highly efficient in storage and can run 1000 times faster while generating hair with 30K strands. The convolutional neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp. We introduce a collision loss to synthesize more plausible hairstyles, and the visibility of each strand is also used as a weight term to improve the reconstruction accuracy. The encoder-decoder architecture of our network naturally provides a compact and continuous representation for hairstyles, which allows us to interpolate naturally between hairstyles. We use a large set of rendered synthetic hair models to train our network. Our method scales to real images because an intermediate 2D orientation field, automatically calculated from the real image, factors out the difference between synthetic and real hairs. We demonstrate the effectiveness and robustness of our method on a wide range of challenging real Internet pictures, and show reconstructed hair sequences from videos.", "title": "" }, { "docid": "b1138def2e8c5206eecc9cefa5a7c901", "text": "Soft robots have recently demonstrated impressive abilities to adapt to objects and their environment with limited sensing and actuation. However, mobile soft robots are typically fabricated using laborious molding processes that result in limited actuated degrees of freedom and hence limited locomotion capabilities. In this paper, we present a 3D printed robot with bellowed soft legs capable of rotation about two axes. This allows our robot to navigate rough terrain that previously posed a significant challenge to soft robots. We present models and FEM simulations for the soft leg modules and predict the robot locomotion capabilities. We use finite element analysis to simulate the actuation characteristics of these modules. We then compared the analytical and computational results to experimental results with a tethered prototype. The experimental soft robot is capable of lifting its legs 5.3 cm off the ground and is able to walk at speeds up to 20 mm/s (0.13 bl/s). This work represents a practical approach to the design and fabrication of functional mobile soft robots.", "title": "" }, { "docid": "08b2b3539a1b10f7423484946121ed50", "text": "BACKGROUND\nCatheter ablation of persistent atrial fibrillation yields an unsatisfactorily high number of failures. The hybrid approach has recently emerged as a technique that overcomes the limitations of both surgical and catheter procedures alone.\n\n\nMETHODS AND RESULTS\nWe investigated the sequential (staged) hybrid method, which consists of a surgical thoracoscopic radiofrequency ablation procedure followed by radiofrequency catheter ablation 6 to 8 weeks later using the CARTO 3 mapping system. Fifty consecutive patients (mean age 62±7 years, 32 males) with long-standing persistent atrial fibrillation (41±34 months) and a dilated left atrium (>45 mm) were included and prospectively followed in an unblinded registry. 
During the electrophysiological part of the study, all 4 pulmonary veins were found to be isolated in 36 (72%) patients and a complete box-lesion was confirmed in 14 (28%) patients. All gaps were successfully re-ablated. Twelve months after the completed hybrid ablation, 47 patients (94%) were in normal sinus rhythm (4 patients with paroxysmal atrial fibrillation required propafenone and 1 patient underwent a redo catheter procedure). The majority of arrhythmias recurred during the first 3 months. Beyond 12 months, there were no arrhythmia recurrences detected. The surgical part of the procedure was complicated by 7 (13.7%) major complications, while no serious adverse events were recorded during the radiofrequency catheter part of the procedure.\n\n\nCONCLUSIONS\nThe staged hybrid epicardial-endocardial treatment of long-standing persistent atrial fibrillation seems to be extremely effective in maintenance of normal sinus rhythm compared to radiofrequency catheter or surgical ablation alone. Epicardial ablation alone cannot guarantee durable transmural lesions.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: www.ablace.cz Unique identifier: cz-060520121617.", "title": "" }, { "docid": "0c9bfe01ed4e0f35ff30041db17b6487", "text": "We demonstrate a system for tracking and analyzing moods of bloggers worldwide, as reflected in the largest blogging community, LiveJournal. Our system collects thousands of blog posts every hour, performs various analyses on the posts and presents the results graphically. Exploring the Blogspace From the point of view of information access, the blogspace offers many natural opportunities beyond traditional search facilities, such as trend detection, topic tracking, link tracking, feed generation, etc. But there is more. Many blog authoring environments allow bloggers to tag their entries with highly individual (and personal) features. Users of LiveJournal, currently the largest weblog community, have the option of reporting theirmoodat the time of the post; users can either select a mood from a predefined list of 132 common moods such as “amused” or “angry,” or enter free-text. A large percentage of LiveJournal users chooses to utilize this option, tagging their postings with a mood. This results in a stream of hundreds of weblog posts tagged with mood information per minute, from hundreds of thousands of different users across the globe. Our focus in this demo is on providing access to the blogspace using moods as the “central” dimension. The type of information needs that we are interested in are best illustrated by questions such as: How do moods develop? How are they related? How do global events impact moods? And: Can global mood swings be traced back to global events? We describe MoodViews, a collection of tools for analyzing, tracking and visualizing moods and mood changes in blogs posted by LiveJournal users.", "title": "" }, { "docid": "945b2067076bd47485b39c33fb062ec1", "text": "Computation of floating-point transcendental functions has a relevant importance in a wide variety of scientific applications, where the area cost, error and latency are important requirements to be attended. This paper describes a flexible FPGA implementation of a parameterizable floating-point library for computing sine, cosine, arctangent and exponential functions using the CORDIC algorithm. 
The novelty of the proposed architecture is that by sharing the same resources the CORDIC algorithm can be used in two operation modes, allowing it to compute the sine, cosine or arctangent functions. Additionally, in case of the exponential function, the architectures change automatically between the CORDIC or a Taylor approach, which helps to improve the precision characteristics of the circuit, specifically for small input values after the argument reduction. Synthesis of the circuits and an experimental analysis of the errors have demonstrated the correctness and effectiveness of the implemented cores and allow the designer to choose, for general-purpose applications, a suitable bit-width representation and number of iterations of the CORDIC algorithm.", "title": "" }, { "docid": "352b850c526fd562c5d0c43dfea533f5", "text": "Social network has lately shown an important impact in both scientific and social societies and is considered a highly weighted source of information nowadays. Due to its noticeable significance, several research movements were introduced in this domain including: Location-Based Social Networks (LBSN), Recommendation Systems, Sentiment Analysis Applications, and many others. Location Based Recommendation systems are among the highly required applications for predicting human mobility based on users' social ties as well as their spatial preferences. In this paper we introduce a trust based recommendation algorithm that addresses the problem of recommending locations based on both users' interests as well as social trust among users. In our study we use two real LBSN, Gowalla and Brightkite that include the social relationships among users as well as data about their visited locations. Experiments showing the performance of the proposed trust based recommendation algorithm are also presented.", "title": "" }, { "docid": "5bef975924d427c3ae186d92a93d4f74", "text": "The Voronoi diagram of a set of sites partitions space into regions, one per site; the region for a site s consists of all points closer to s than to any other site. The dual of the Voronoi diagram, the Delaunay triangulation, is the unique triangulation such that the circumsphere of every simplex contains no sites in its interior. Voronoi diagrams and Delaunay triangulations have been rediscovered or applied in many areas of mathematics and the natural sciences; they are central topics in computational geometry, with hundreds of papers discussing algorithms and extensions. Section 27.1 discusses the definition and basic properties in the usual case of point sites in R with the Euclidean metric, while Section 27.2 gives basic algorithms. Some of the many extensions obtained by varying metric, sites, environment, and constraints are discussed in Section 27.3. Section 27.4 finishes with some interesting and nonobvious structural properties of Voronoi diagrams and Delaunay triangulations.", "title": "" }, { "docid": "19f3720d0077783554b6d9cd71e95c48", "text": "Radical prostatectomy is performed on approximately 40% of men with organ-confined prostate cancer. Pathologic information obtained from the prostatectomy specimen provides important prognostic information and guides recommendations for adjuvant treatment. The current pathology protocol in most centers involves primarily qualitative assessment. In this paper, we describe and evaluate our system for automatic prostate cancer detection and grading on hematoxylin & eosin-stained tissue images. 
Our approach is intended to address the dual challenges of large data size and the need for high-level tissue information about the locations and grades of tumors. Our system uses two stages of AdaBoost-based classification. The first provides high-level tissue component labeling of a superpixel image partitioning. The second uses the tissue component labeling to provide a classification of cancer versus noncancer, and low-grade versus high-grade cancer. We evaluated our system using 991 sub-images extracted from digital pathology images of 50 whole-mount tissue sections from 15 prostatectomy patients. We measured accuracies of 90% and 85% for the cancer versus noncancer and high-grade versus low-grade classification tasks, respectively. This system represents a first step toward automated cancer quantification on prostate digital histopathology imaging, which could pave the way for more accurately informed postprostatectomy patient care.", "title": "" }, { "docid": "838ef5791a8c127f11a53406cf5599d0", "text": "Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "title": "" }, { "docid": "6f8e565aff657cbc1b65217d72ead3ab", "text": "This paper explores patterns of adoption and use of information and communications technology (ICT) by small and medium sized enterprises (SMEs) in the southwest London and Thames Valley region of England. The paper presents preliminary results of a survey of around 400 SMEs drawn from four economically significant sectors in the region: food processing, transport and logistics, media and Internet services. The main objectives of the study were to explore ICT adoption and use patterns by SMEs, to identify factors enabling or inhibiting the successful adoption and use of ICT, and to explore the effectiveness of government policy mechanisms at national and regional levels. While our main result indicates a generally favourable attitude to ICT amongst the SMEs surveyed, it also suggests a failure to recognise ICT’s strategic potential. A surprising result was the overwhelming ignorance of regional, national and European Union wide policy initiatives to support SMEs. This strikes at the very heart of regional, national and European policy that have identified SMEs as requiring specific support mechanisms. Our findings from one of the UK’s most productive regions therefore have important implications for policy aimed at ICT adoption and use by SMEs.", "title": "" }, { "docid": "412b616f4fcb9399c8220c542ecac83e", "text": "Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and limits the cropping region with arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image which is very time-consuming. 
Motivated by these challenges, we firstly formulate the aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human's decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experiment results show that our method achieves the state-of-the-art performance with much fewer candidate windows and much less time compared with previous weakly supervised methods.", "title": "" }, { "docid": "0dfd5345c2dc3fe047dcc635760ffedd", "text": "This paper presents a fast, joint spatial- and Doppler velocity-based, probabilistic approach for ego-motion estimation for single and multiple radar-equipped robots. The normal distribution transform is used for the fast and accurate position matching of consecutive radar detections. This registration technique is successfully applied to laser-based scan matching. To overcome discontinuities of the original normal distribution approach, an appropriate clustering technique provides a globally smooth mixed-Gaussian representation. It is shown how this matching approach can be significantly improved by taking the Doppler information into account. The Doppler information is used in a density-based approach to extend the position matching to a joint likelihood optimization function. Then, the estimated ego-motion maximizes this function. Large-scale real world experiments in an urban environment using a 77 GHz radar show the robust and accurate ego-motion estimation of the proposed algorithm. In the experiments, comparisons are made to state-of-the-art algorithms, the vehicle odometry, and a high-precision inertial measurement unit.", "title": "" }, { "docid": "bee4b2dfab47848e8429d4b4617ec9e5", "text": "Benefit from the quick development of deep learning techniques, salient object detection has achieved remarkable progresses recently. However, there still exists following two major challenges that hinder its application in embedded devices, low resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keep accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the current predicted salient regions from side-output features, the network can eventually explore the missing object parts and details which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, and with advantages in terms of simplicity, efficiency (45 FPS) and model size (81 MB).", "title": "" }, { "docid": "46df34ed9fb6abcc0e6250972fca1faa", "text": "Reliable, scalable and secured framework for predicting Heart diseases by mining big data is designed. 
Components of Apache Hadoop are used for processing of big data used for prediction. For increasing the performance, scalability, and reliability Hadoop clusters are deployed on Google Cloud Storage. Mapreduce based Classification via clustering method is proposed for efficient classification of instances using reduced attributes. Mapreduce based C 4.5 decision tree algorithm is improved and implemented to classify the instances. Datasets are analyzed on WEKA (Waikato Environment for Knowledge Analysis) and Hadoop. Classification via clustering method performs classification with 98.5% accuracy on WEKA with reduced attributes. On Mapreduce paradigm using this approach execution time is improved. With clustered instances 49 nodes of decision tree are reduced to 32 and execution time of Mapreduce program is reduced from 113 seconds to 84 seconds. Mapreduce based decision trees present classification of instances more accurately as compared to WEKA based decision trees.", "title": "" }, { "docid": "c5b9053b1b22d56dd827009ef529004d", "text": "An integrated receiver with high sensitivity and low walk error for a military purpose pulsed time-of-flight (TOF) LADAR system is proposed. The proposed receiver adopts a dual-gain capacitive-feedback TIA (C-TIA) instead of widely used resistive-feedback TIA (R-TIA) to increase the sensitivity. In addition, a new walk-error improvement circuit based on a constant-delay detection method is proposed. Implemented in 0.35 μm CMOS technology, the receiver achieves an input-referred noise current of 1.36 pA/√Hz with bandwidth of 140 MHz and minimum detectable signal (MDS) of 10 nW with a 5 ns pulse at SNR=3.3, maximum walk-error of 2.8 ns, and a dynamic range of 1:12,000 over the operating temperature range of -40 °C to +85 °C.", "title": "" }, { "docid": "31d66211511ae35d71c7055a2abf2801", "text": "BACKGROUND\nPrevious evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden- object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training.\n\n\nCONCLUSION/SIGNIFICANCE\nCognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. 
Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.", "title": "" }, { "docid": "b5bb4e9e131cb4895ee1b22c60f9e0c8", "text": "This paper proposes an eye state detection system using Haar Cascade Classifier and Circular Hough Transform. Our proposed system first detects the face and then the eyes using Haar Cascade Classifiers, which differentiate between opened and closed eyes. Circular Hough Transform (CHT) is used to detect the circular shape of the eye and make sure that the eye is detected correctly by the classifiers. The accuracy of the eye detection is 98.56% on our database which contains 2856 images for opened eye and 2384 images for closed eye. The system works on several stages and is fully automatic. The eye state detection system was tested by several people, and the accuracy of the proposed system is 96.96%.", "title": "" }, { "docid": "ff59d1ec0c3eb11b3201e5708a585ca4", "text": "In this paper, we described our system for Knowledge Base Acceleration (KBA) Track at TREC 2013. The KBA Track has two tasks, CCR and SSF. Our approach consists of two major steps: selecting documents and extracting slot values. Selecting documents is to look for and save the documents that mention the entities of interest. The second step involves with generating seed patterns to extract the slot values and computing confidence score.", "title": "" } ]
scidocsrr
d98f68cc59d1386a2b1207517090fc87
Improving Question Answering with External Knowledge
[ { "docid": "e79679c3ed82c1c7ab83cfc4d6e0280e", "text": "Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).", "title": "" }, { "docid": "d5d03cdfd3a6d6c2b670794d76e91c8e", "text": "We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students’ ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/ ̃glai1/data/race/ and the code is available at https://github.com/ cheezer/RACE_AR_baselines.", "title": "" }, { "docid": "fe39547650623fbf86be3da46a6c5a8b", "text": "This paper describes our system for SemEval2018 Task 11: Machine Comprehension using Commonsense Knowledge (Ostermann et al., 2018b). We use Threeway Attentive Networks (TriAN) to model interactions between the passage, question and answers. To incorporate commonsense knowledge, we augment the input with relation embedding from the graph of general knowledge ConceptNet (Speer et al., 2017). As a result, our system achieves state-of-the-art performance with 83.95% accuracy on the official test data. Code is publicly available at https://github.com/ intfloat/commonsense-rc.", "title": "" }, { "docid": "8f3d86a21b8a19c7d3add744c2e5e202", "text": "Question answering (QA) systems are easily distracted by irrelevant or redundant words in questions, especially when faced with long or multi-sentence questions in difficult domains. This paper introduces and studies the notion of essential question terms with the goal of improving such QA solvers. 
We illustrate the importance of essential question terms by showing that humans’ ability to answer questions drops significantly when essential terms are eliminated from questions. We then develop a classifier that reliably (90% mean average precision) identifies and ranks essential terms in questions. Finally, we use the classifier to demonstrate that the notion of question term essentiality allows state-of-the-art QA solvers for elementary-level science questions to make better and more informed decisions, improving performance by up to 5%. We also introduce a new dataset of over 2,200 crowd-sourced essential terms annotated science questions.", "title": "" } ]
[ { "docid": "69e87ea7f07f96088486b7dd9105841b", "text": "When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.", "title": "" }, { "docid": "432ff163e4dded948aa5a27aa440cd30", "text": "Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and give information about their use of the Internet. This survey research investigated undergraduates’ computer anxiety, computer self-efficacy, and reported use of and attitudes toward the Internet. This study also examined differences in computer anxiety, computer selfefficacy, attitudes toward the Internet and reported use of the Internet for undergraduates with different demographic variables. The findings suggest that the undergraduates had moderate computer anxiousness, medium attitudes toward the Internet, and high computer self-efficacy and used the Internet extensively for educational purposes such as doing research, downloading electronic resources and e-mail communications. This study challenges the long perceived male bias in the computer environment and supports recent studies that have identified greater gender equivalence in interest, use, and skills levels. However, there were differences in undergraduates’ Internet usage levels based on the discipline of study. Furthermore, higher levels of Internet usage did not necessarily translate into better computer self-efficacy among the undergraduates. A more important factor in determining computer self-efficacy could be the discipline of study and undergraduates studying computer related disciplines appeared to have higher self-efficacy towards computers and the Internet. Undergraduates who used the Internet more often may not necessarily feel more comfortable using them. Possibly, other factors such as the types of application used, the purpose for using, and individual satisfaction could also influence computer self-efficacy and computer anxiety. However, although Internet usage levels may not have any impact on computer self-efficacy, higher usage of the Internet does seem to decrease the levels of computer anxiety among the undergraduates. Undergraduates with lower computer anxiousness demonstrated more positive attitudes toward the Internet in this study.", "title": "" }, { "docid": "e37b3a68c850d1fb54c9030c22b5792f", "text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. 
This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.", "title": "" }, { "docid": "602ccb25257c6ce6c0bca2cb81c00628", "text": "The detection and tracking of moving vehicles is a necessity for collision-free navigation. In natural unstructured environments, motion-based detection is challenging due to low signal to noise ratio. This paper describes our approach for a 14 km/h fast autonomous outdoor robot that is equipped with a Velodyne HDL-64E S2 for environment perception. We extend existing work that has proven reliable in urban environments. To overcome the unavailability of road network information for background separation, we introduce a foreground model that incorporates geometric as well as temporal cues. Local shape estimates successfully guide vehicle localization. Extensive evaluation shows that the system works reliably and efficiently in various outdoor scenarios without any prior knowledge about the road network. Experiments with our own sensor as well as on publicly available data from the DARPA Urban Challenge revealed more than 96% correctly identified vehicles.", "title": "" }, { "docid": "8b7f931e800cd1ae810453ecbc35b225", "text": "In this paper we present empirical results from a study examining the effects of antenna diversity and placement on vehicle-to-vehicle link performance in vehicular ad hoc networks. The experiments use roof- and in-vehicle mounted omni-directional antennas and IEEE 802.11a radios operating in the 5 GHz band, which is of interest for planned inter-vehicular communication standards. Our main findings are two-fold. First, we show that radio reception performance is sensitive to antenna placement in the 5 Ghz band. Second, our results show that, surprisingly, a packet level selection diversity scheme using multiple antennas and radios, multi-radio packet selection (MRPS), improves performance not only in a fading channel but also in line-of-sight conditions. This is due to propagation being affected by car geometry, leading to the highly non-uniform antenna patterns. These patterns are very sensitive to the exact antenna position on the roof, for example at a transmit power of 40 mW the line-of-sight communication range varied between 50 and 250 m depending on the orientation of the cars. These findings have implications for vehicular MAC protocol design. Protocols may have to cope with an increased number of hidden nodes due to the directional antenna patterns. 
However, car makers can reduce these effects through careful antenna placement and diversity.", "title": "" }, { "docid": "fe48a551dfbe397b7bcf52e534dfcf00", "text": "This meta-analysis of 12 dependent variables from 9 quantitative studies comparing music to no-music conditions during treatment of children and adolescents with autism resulted in an overall effect size of d =.77 and a mean weighted correlation of r =.36 (p =.00). Since the confidence interval did not include 0, results were considered to be significant. All effects were in a positive direction, indicating benefits of the use of music in intervention. The homogeneity Q value was not significant (p =.83); therefore, results of included studies are considered to be homogeneous and explained by the overall effect size. The significant effect size, combined with the homogeneity of the studies, leads to the conclusion that all music intervention, regardless of purpose or implementation, has been effective for children and adolescents with autism. Included studies are described in terms of type of dependent variables measured; theoretical approach; number of subjects in treatment sessions; participation in and use, selection, and presentation of music; researcher discipline; published or unpublished source; and subject age. Clinical implications as well as recommendations for future research are discussed.", "title": "" }, { "docid": "73e4f93a46d8d66599aaaeaf71c8efe2", "text": "The galvanometer-based scanners (GS) are oscillatory optical systems utilized in high-end biomedical technologies. From a control point-of-view the GSs are mechatronic systems (mainly positioning servo-systems) built usually in a close loop structure and controlled by different control algorithms. The paper presents a Model based Predictive Control (MPC) solution for the mobile equipment (moving magnet and galvomirror) of a GS. The development of a high-performance control solution is based to a basic closed loop GS which consists of a PD-L1 controller and a servomotor. The mathematical model (MM) and the parameters of the basic construction are identified using a theoretical approach followed by an experimental identification. The equipment is used in our laboratory for better dynamical performances for biomedical imaging systems. The control solutions proposed are supported by simulations carried out in Matlab/Simulink.", "title": "" }, { "docid": "cb7dda8f4059e5a66e4a6e26fcda601e", "text": "Purpose – This UK-based research aims to build on the US-based work of Keller and Aaker, which found a significant association between “company credibility” (via a brand’s “expertise” and “trustworthiness”) and brand extension acceptance, hypothesising that brand trust, measured via two correlate dimensions, is significantly related to brand extension acceptance. Design/methodology/approach – Discusses brand extension and various prior, validated influences on its success. Focuses on the construct of trust and develops hypotheses about the relationship of brand trust with brand extension acceptance. The hypotheses are then tested on data collected from consumers in the UK. Findings – This paper, using 368 consumer responses to nine, real, low involvement UK product and service brands, finds support for a significant association between the variables, comparable in strength with that between media weight and brand share, and greater than that delivered by the perceived quality level of the parent brand. 
Originality/value – The research findings, which develop a sparse literature in this linkage area, are of significance to marketing practitioners, since brand trust, already associated with brand equity and brand loyalty, and now with brand extension, needs to be managed and monitored with care. The paper prompts further investigation of the relationship between brand trust and brand extension acceptance in other geographic markets and with other higher involvement categories.", "title": "" }, { "docid": "1ea21d88740aa6b2712205823f141e57", "text": "AIM\nOne of the critical aspects of esthetic dentistry is creating geometric or mathematical proportions to relate the successive widths of the anterior teeth. The golden proportion, the recurring esthetic dental (RED) proportion, and the golden percentage are theories introduced in this field. The aim of this study was to investigate the existence of the golden proportion, RED proportion, and the golden percentage between the widths of the maxillary anterior teeth in individuals with natural dentition.\n\n\nMETHODS AND MATERIALS\nStandardized frontal images of 376 dental student smiles were captured. The images were transferred to a personal computer, the widths of the maxillary anterior teeth were measured, and calculations were made according to each of the above mentioned theories. The data were statistically analyzed using paired student T-test (level of significance P<0.05).\n\n\nRESULTS\nThe golden proportion was found to be accurate between the width of the right central and lateral incisors in 31.3% of men and 27.1% of women. The values of the RED proportion were not constant, and the farther the one moves distally from the midline the higher the values. Furthermore, the results revealed the golden percentage was rather constant in terms of relative tooth width. The width of the central incisor represents 23%, the lateral incisor 15%, and the canine 12% of the width of the six maxillary anterior teeth as viewed from the front.\n\n\nCONCLUSIONS\nBoth the golden proportion and the RED proportion are unsuitable methods to relate the successive widths of the maxillary anterior teeth. However, the golden percentage theory seems to be applicable to relate the successive widths of the maxillary anterior teeth if percentages are adjusted taking into consideration the ethnicity of the population.", "title": "" }, { "docid": "543a0cd5ac9aae173a1af5c3215b002f", "text": "Situated question answering is the problem of answering questions about an environment such as an image or diagram. This problem requires jointly interpreting a question and an environment using background knowledge to select the correct answer. We present Parsing to Probabilistic Programs (P ), a novel situated question answering model that can use background knowledge and global features of the question/environment interpretation while retaining efficient approximate inference. Our key insight is to treat semantic parses as probabilistic programs that execute nondeterministically and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly-released data set of 5000 science diagram questions, outperforming several competitive classical and neural baselines.", "title": "" }, { "docid": "bbc984f02b81ee66d7dc617ed34a7e98", "text": "Packet losses are common in data center networks, may be caused by a variety of reasons (e.g., congestion, blackhole), and have significant impacts on application performance and network operations. 
Thus, it is important to provide fast detection of packet losses independent of their root causes. We also need to capture both the locations and packet header information of the lost packets to help diagnose and mitigate these losses. Unfortunately, existing monitoring tools that are generic in capturing all types of network events often fall short in capturing losses fast with enough details and low overhead. Due to the importance of loss in data centers, we propose a specific monitoring system designed for loss detection. We propose LossRadar, a system that can capture individual lost packets and their detailed information in the entire network on a fine time scale. Our extensive evaluation on prototypes and simulations demonstrates that LossRadar is easy to implement in hardware switches, achieves low memory and bandwidth overhead, while providing detailed information about individual lost packets. We also build a loss analysis tool that demonstrates the usefulness of LossRadar with a few example applications.", "title": "" }, { "docid": "ee532e8bb51a7b49506df59bd9ad3282", "text": "People learn from tests. Providing tests often enhances retention more than additional study opportunities, but is this testing effect mediated by processes related to retrieval that are fundamentally different from study processes? Some previous studies have reported that testing enhances retention relative to additional studying, but only after a relatively long retention interval. To the extent that this interaction with retention interval dissociates the effects of studying and testing, it may provide crucial evidence for different underlying processes. However, these findings can be questioned because of methodological differences between the study and the test conditions. In two experiments, we eliminated or minimized the confounds that rendered the previous findings equivocal and still obtained the critical interaction. Our results strengthen the evidence for the involvement of different processes underlying the effects of studying and testing, and support the hypothesis that the testing effect is grounded in retrieval-related processes.", "title": "" }, { "docid": "bff21b4a0bc4e7cc6918bc7f107a5ca5", "text": "This paper discusses driving system design based on traffic rules. This allows fully automated driving in an environment with human drivers, without necessarily changing equipment on other vehicles or infrastructure. It also facilitates cooperation between the driving system and the host driver during highly automated driving. The concept, referred to as legal safety, is illustrated for highly automated driving on highways with distance keeping, intelligent speed adaptation, and lane-changing functionalities. Requirements by legal safety on perception and control components are discussed. This paper presents the actual design of a legal safety decision component, which predicts object trajectories and calculates optimal subject trajectories. System implementation on automotive electronic control units and results on vehicle and simulator are discussed.", "title": "" }, { "docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf", "text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. 
Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.", "title": "" }, { "docid": "16fec520bf539ab23a5164ffef5561b4", "text": "This article traces the major trends in TESOL methods in the past 15 years. It focuses on the TESOL profession’s evolving perspectives on language teaching methods in terms of three perceptible shifts: (a) from communicative language teaching to task-based language teaching, (b) from method-based pedagogy to postmethod pedagogy, and (c) from systemic discovery to critical discourse. It is evident that during this transitional period, the profession has witnessed a heightened awareness about communicative and task-based language teaching, about the limitations of the concept of method, about possible postmethod pedagogies that seek to address some of the limitations of method, about the complexity of teacher beliefs that inform the practice of everyday teaching, and about the vitality of the macrostructures—social, cultural, political, and historical—that shape the microstructures of the language classroom. This article deals briefly with the changes and challenges the trend-setting transition seems to be bringing about in the profession’s collective thought and action.", "title": "" }, { "docid": "9420760d6945440048cee3566ce96699", "text": "In this work, we develop a computer vision based fall prevention system for hospital ward application. To prevent potential falls, once the event of patient get up from the bed is automatically detected, nursing staffs are alarmed immediately for assistance. For the detection task, we use a RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.", "title": "" }, { "docid": "76502e21fbb777a3442928897ef271f0", "text": "Staphylococcus saprophyticus (S. saprophyticus) is a Gram-positive, coagulase-negative facultative bacterium belongs to Micrococcaceae family. It is a unique uropathogen associated with uncomplicated urinary tract infections (UTIs), especially cystitis in young women. Young women are very susceptible to colonize this organism in the urinary tracts and it is spread through sexual intercourse. S. saprophyticus is the second most common pathogen after Escherichia coli causing 10-20% of all UTIs in sexually active young women [13]. It contains the urease enzymes that hydrolyze the urea to produce ammonia. The urease activity is the main factor for UTIs infection. Apart from urease activity it has numerous transporter systems to adjust against change in pH, osmolarity, and concentration of urea in human urine [2]. After severe infections, it causes various complications such as native valve endocarditis [4], pyelonephritis, septicemia, [5], and nephrolithiasis [6]. About 150 million people are diagnosed with UTIs each year worldwide [7]. 
Several virulence factors includes due to the adherence to urothelial cells by release of lipoteichoic acid is a surface-associated adhesion amphiphile [8], a hemagglutinin that binds to fibronectin and hemagglutinates sheep erythrocytes [9], a hemolysin; and production of extracellular slime are responsible for resistance properties of S. saprophyticus [1]. Based on literature, S. saprophyticus strains are susceptible to vancomycin, rifampin, gentamicin and amoxicillin-clavulanic, while resistance to other antimicrobials such as erythromycin, clindamycin, fluoroquinolones, chloramphenicol, trimethoprim/sulfamethoxazole, oxacillin, and Abstract", "title": "" }, { "docid": "ceef658faa94ad655521ece5ac5cba1d", "text": "We propose learning a semantic visual feature representation by training a neural network supervised solely by point and object trajectories in video sequences. Currently, the predominant paradigm for learning visual features involves training deep convolutional networks on an image classification task using very large human-annotated datasets, e.g. ImageNet. Though effective as supervision, semantic image labels are costly to obtain. On the other hand, under high enough frame rates, frame-to-frame associations between the same 3D physical point or an object can be established automatically. By transitivity, such associations grouped into tracks can relate object/point appearance across large changes in pose, illumination and camera viewpoint, providing a rich source of invariance that can be used for training. We train a siamese network we call it AssociationNet to discriminate between correct and wrong associations between patches in different frames of a video sequence. We show that AssociationNet learns useful features when used as pretraining for object recognition in static images, and outperforms random weight initialization and alternative pretraining methods.", "title": "" }, { "docid": "d00957d93af7b2551073ba84b6c0f2a6", "text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn", "title": "" }, { "docid": "1c3cf3ccdb3b7129c330499ca909b193", "text": "Procedural methods for animating turbulent fluid are often preferred over simulation, both for speed and for the degree of animator control. 
We offer an extremely simple approach to efficiently generating turbulent velocity fields based on Perlin noise, with a formula that is exactly incompressible (necessary for the characteristic look of everyday fluids), exactly respects solid boundaries (not allowing fluid to flow through arbitrarily-specified surfaces), and whose amplitude can be modulated in space as desired. In addition, we demonstrate how to combine this with procedural primitives for flow around moving rigid objects, vortices, etc.", "title": "" } ]
scidocsrr
606869cd81b4aaf23f4f05117f8765c4
Lexico-syntactic text simplification and compression with typed dependencies
[ { "docid": "52ebff6e9509b27185f9f12bc65d86f8", "text": "We address the problem of simplifying Portuguese texts at the sentence level by treating it as a \"translation task\". We use the Statistical Machine Translation (SMT) framework to learn how to translate from complex to simplified sentences. Given a parallel corpus of original and simplified texts, aligned at the sentence level, we train a standard SMT system and evaluate the \"translations\" produced using both standard SMT metrics like BLEU and manual inspection. Results are promising according to both evaluations, showing that while the model is usually overcautious in producing simplifications, the overall quality of the sentences is not degraded and certain types of simplification operations, mainly lexical, are appropriately captured.", "title": "" }, { "docid": "a93969b08efbc81c80129790d93e39de", "text": "Text simplification aims to rewrite text into simpler versions, and thus make information accessible to a broader audience. Most previous work simplifies sentences using handcrafted rules aimed at splitting long sentences, or substitutes difficult words using a predefined dictionary. This paper presents a datadriven model based on quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. We describe how such a grammar can be induced from Wikipedia and propose an integer linear programming model for selecting the most appropriate simplification from the space of possible rewrites generated by the grammar. We show experimentally that our method creates simplifications that significantly reduce the reading difficulty of the input, while maintaining grammaticality and preserving its meaning.", "title": "" }, { "docid": "3909409a40aef1d1b6fea5b8a920a707", "text": "Lexical and syntactic simplification aim to make texts more accessible to certain audiences. Syntactic simplification uses either hand-crafted linguistic rules for deep syntactic transformations, or machine learning techniques to model simpler transformations. Lexical simplification performs a lookup for synonyms followed by context and/or frequency-based models. In this paper we investigate modelling both syntactic and lexical simplification through the learning of general tree transduction rules. Experiments with the Simple English Wikipedia corpus show promising results but highlight the need for clever filtering strategies to remove noisy transformations. Resumo. A simplificação em nı́vel lexical e sintático objetiva tornar textos mais acessı́veis a certos públicos-alvo. Simplificação em nı́vel sintático usa regras confeccionadas manualmente para empregar transformações sintáticas, ou técnicas de aprendizado de máquina para modelar transformações mais simples. Simplificação em nı́vel lexical emprega busca por sinônimos para palavras complexas seguida por análise de contexto e/ou busca em modelos de frequência de palavras. Neste trabalho investiga-se a modelagem de ambas estratégias de simplificação em nı́vel sintático e lexical pelo aprendizado de regras através da transdução de árvores. Experimentos com dados da Simple English Wikipedia mostram resultados promissores, porém destacam a necessidade de estratégias inteligentes de filtragem para remover transformações ruidosas.", "title": "" } ]
[ { "docid": "c5bf370e5369fb30905b5e5f73528b6c", "text": "Mars rovers have to this point been almost completely reliant on the solar panel/rechargeable battery combination as a source of power. Curiosity, currently en route, relies on radio isotope decay as its source of electrical power. Given the limited amount of space available for solar panels and that the wattage available from radioisotope decay is limited; power is clearly a critical resource for any rover. The goal of this work is to estimate the energy cost of traversing terrains of various properties. Knowledge of energy costs for terrain traversal will allow for more efficient path planning enabling rovers to have longer periods of activity. Our system accepts grid-based terrain elevation data in the form of a Digital Elevation Model (DEM) along with rover and soil parameters, and uses a newly developed model of the most common rover suspension design (rocker-bogie) along with a terramechanics-based wheel-soil interaction model to build a map of the estimated torque required by each wheel to move the rover to each adjacent terrain square. Future work will involve real world testing and verification of our model.", "title": "" }, { "docid": "065ca3deb8cb266f741feb67e404acb5", "text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet", "title": "" }, { "docid": "2742db8262616f2b69d92e0066e6930c", "text": "Most of previous work in knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB competition yet receives little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side gives consistently higher quality predictions compared to baseline methods. We also perform manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.", "title": "" }, { "docid": "a0f4b7f3f9f2a5d430a3b8acead2b746", "text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. 
Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse", "title": "" }, { "docid": "9624ce8061b8476d7fe8d61ef3b565b8", "text": "The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.", "title": "" }, { "docid": "9b13225d4a51419578362a38f22b9c9c", "text": "Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. 
We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.", "title": "" }, { "docid": "cfec098f84e157a2e12f0ff40551c977", "text": "In this paper, an online news recommender system for the popular social network, Facebook, is described. This system provides daily newsletters for communities on Facebook. The system fetches the news articles and filters them based on the community description to prepare the daily news digest. Explicit survey feedback from the users show that most users found the application useful and easy to use. They also indicated that they could get some community specific articles that they would not have got otherwise.", "title": "" }, { "docid": "bf2065f6c04f566110667a22a9d1b663", "text": "Casticin, a polymethoxyflavone occurring in natural plants, has been shown to have anticancer activities. In the present study, we aims to investigate the anti-skin cancer activity of casticin on melanoma cells in vitro and the antitumor effect of casticin on human melanoma xenografts in nu/nu mice in vivo. A flow cytometric assay was performed to detect expression of viable cells, cell cycles, reactive oxygen species production, levels of [Formula: see text] and caspase activity. A Western blotting assay and confocal laser microscope examination were performed to detect expression of protein levels. In the in vitro studies, we found that casticin induced morphological cell changes and DNA condensation and damage, decreased the total viable cells, and induced G2/M phase arrest. Casticin promoted reactive oxygen species (ROS) production, decreased the level of [Formula: see text], and promoted caspase-3 activities in A375.S2 cells. The induced G2/M phase arrest indicated by the Western blotting assay showed that casticin promoted the expression of p53, p21 and CHK-1 proteins and inhibited the protein levels of Cdc25c, CDK-1, Cyclin A and B. The casticin-induced apoptosis indicated that casticin promoted pro-apoptotic proteins but inhibited anti-apoptotic proteins. These findings also were confirmed by the fact that casticin promoted the release of AIF and Endo G from mitochondria to cytosol. An electrophoretic mobility shift assay (EMSA) assay showed that casticin inhibited the NF-[Formula: see text]B binding DNA and that these effects were time-dependent. In the in vivo studies, results from immuno-deficient nu/nu mice bearing the A375.S2 tumor xenograft indicated that casticin significantly suppressed tumor growth based on tumor size and weight decreases. Early G2/M arrest and mitochondria-dependent signaling contributed to the apoptotic A375.S2 cell demise induced by casticin. In in vivo experiments, A375.S2 also efficaciously suppressed tumor volume in a xenotransplantation model. 
Therefore, casticin might be a potential therapeutic agent for the treatment of skin cancer in the future.", "title": "" }, { "docid": "8e5cbfe1056a75b1116c93d780c00847", "text": "We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.", "title": "" }, { "docid": "68a77338227063ce4880eb0fe98a3a92", "text": "Mammalian microRNAs (miRNAs) have recently been identified as important regulators of gene expression, and they function by repressing specific target genes at the post-transcriptional level. Now, studies of miRNAs are resolving some unsolved issues in immunology. Recent studies have shown that miRNAs have unique expression profiles in cells of the innate and adaptive immune systems and have pivotal roles in the regulation of both cell development and function. Furthermore, when miRNAs are aberrantly expressed they can contribute to pathological conditions involving the immune system, such as cancer and autoimmunity; they have also been shown to be useful as diagnostic and prognostic indicators of disease type and severity. This Review discusses recent advances in our understanding of both the intended functions of miRNAs in managing immune cell biology and their pathological roles when their expression is dysregulated.", "title": "" }, { "docid": "9027d974a3bb5c48c1d8f3103e6035d6", "text": "The creation of memories about real-life episodes requires rapid neuronal changes that may appear after a single occurrence of an event. How is such demand met by neurons in the medial temporal lobe (MTL), which plays a fundamental role in episodic memory formation? We recorded the activity of MTL neurons in neurosurgical patients while they learned new associations. Pairs of unrelated pictures, one of a person and another of a place, were used to construct a meaningful association modeling the episodic memory of meeting a person in a particular place. We found that a large proportion of responsive MTL neurons expanded their selectivity to encode these specific associations within a few trials: cells initially responsive to one picture started firing to the associated one but not to others. Our results provide a plausible neural substrate for the inception of associations, which are crucial for the formation of episodic memories.", "title": "" }, { "docid": "5f1684f33bb1821cfa6470c470feceea", "text": "In this paper, a new approach is proposed for automated software maintenance. The tool is able to perform 26 different refactorings. 
It also contains a large selection of metrics to measure the impact of the refactorings on the software and six different search based optimization algorithms to improve the software. This tool contains both monoobjective and multi-objective search techniques for software improvement and is fully automated. The paper describes the various capabilities of the tool, the unique aspects of it, and also presents some research results from experimentation. The individual metrics are tested across five different codebases to deduce the most effective metrics for general quality improvement. It is found that the metrics that relate to more specific elements of the code are more useful for driving change in the search. The mono-objective genetic algorithm is also tested against the multi-objective algorithm to see how comparable the results gained are with three separate objectives. When comparing the best solutions of each individual objective the multi-objective approach generates suitable improvements in quality in less time, allowing for rapid maintenance cycles.", "title": "" }, { "docid": "5221a4982626902388540ba95f5a57c3", "text": "In this chapter, event-based control approaches for microalgae culture in industrial reactors are evaluated. Those control systems are applied to regulate the microalgae culture growth conditions such as pH and dissolved oxygen concentration. The analyzed event-based control systems deal with sensor and actuator deadbands approaches in order to provide the desired properties of the controller. Additionally, a selective event-based scheme is evaluated for simultaneous control of pH and dissolved oxygen. In such configurations, the event-based approach provides the possibility to adapt the control system actions to the dynamic state of the controlled bioprocess. In such a way, the event-based control algorithm allows to establish a tradeoff between control performance and number of process update actions. This fact can be directly related with reduction of CO2 injection times, what is also reflected in CO2 losses. The application of selective event-based scheme allows the improved biomass productivity, since the controlled variables are kept within the limits for an optimal photosynthesis rate. Moreover, such a control scheme allows effective CO2 utilization and aeration system energy minimization. The analyzed control system configurations are evaluated for both tubular and raceway photobioreactors to proove its viability for different reactor configurations as well as control system objectives. Additionally, control performance indexes have been used to show the efficiency of the event-based control approaches. The obtained results demonA. Pawlowski (✉) ⋅ S. Dormido Department of Computer Science and Automatic Control, UNED, Madrid, Spain e-mail: a.pawlowski@dia.uned.es S. Dormido e-mail: sdormido@dia.uned.es J.L. Guzmán ⋅ M. Berenguel Department of Informatics, University of Almería, ceiA3, CIESOL, Almería, Spain e-mail: joseluis.guzman@ual.es", "title": "" }, { "docid": "90b1d0a8670e74ff3549226acd94973e", "text": "Language identification is the task of automatically detecting the language(s) present in a document based on the content of the document. In this work, we address the problem of detecting documents that contain text from more than one language (multilingual documents). We introduce a method that is able to detect that a document is multilingual, identify the languages present, and estimate their relative proportions. 
We demonstrate the effectiveness of our method over synthetic data, as well as real-world multilingual documents collected from the web.", "title": "" }, { "docid": "a00cc13a716439c75a5b785407b02812", "text": "A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs.", "title": "" }, { "docid": "5305e147b2aa9646366bc13deb0327b0", "text": "This longitudinal case-study aimed at examining whether purposely teaching for the promotion of higher order thinking skills enhances students’ critical thinking (CT), within the framework of science education. Within a pre-, post-, and post–post experimental design, high school students, were divided into three research groups. The experimental group (n=57) consisted of science students who were exposed to teaching strategies designed for enhancing higher order thinking skills. Two other groups: science (n=41) and non-science majors (n=79), were taught traditionally, and acted as control. By using critical thinking assessment instruments, we have found that the experimental group showed a statistically significant improvement on critical thinking skills components and disposition towards critical thinking subscales, such as truth-seeking, open-mindedness, self-confidence, and maturity, compared with the control groups. Our findings suggest that if teachers purposely and persistently practice higher order thinking strategies for example, dealing in class with real-world problems, encouraging open-ended class discussions, and fostering inquiry-oriented experiments, there is a good chance for a consequent development of critical thinking capabilities.", "title": "" }, { "docid": "c90f5a4a34bb7998208c4c134bbab327", "text": "Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and crossdomain text-to-SQL generation task. SyntaxSQLNet employs a SQL specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. Our task and models with the latest updates are available at https://yale-lily. 
github.io/seq2sql/spider.", "title": "" }, { "docid": "5fe589e370271246b55aa3b100595f01", "text": "Cluster-based distributed file systems generally have a single master to service clients and manage the namespace. Although simple and efficient, that design compromises availability, because the failure of the master takes the entire system down. Before version 2.0.0-alpha, the Hadoop Distributed File System (HDFS) -- an open-source storage, widely used by applications that operate over large datasets, such as MapReduce, and for which an uptime of 24x7 is becoming essential -- was an example of such systems. Given that scenario, this paper proposes a hot standby for the master of HDFS achieved by (i) extending the master's state replication performed by its check pointer helper, the Backup Node, and by (ii) introducing an automatic fail over mechanism. The step (i) took advantage of the message duplication technique developed by other high availability solution for HDFS named Avatar Nodes. The step (ii) employed another Hadoop software: ZooKeeper, a distributed coordination service. That approach resulted in small code changes, 1373 lines, not requiring external components to the Hadoop project. Thus, easing the maintenance and deployment of the file system. Compared to HDFS 0.21, tests showed that both in loads dominated by metadata operations or I/O operations, the reduction of data throughput is no more than 15% on average, and the time to switch the hot standby to active is less than 100 ms. Those results demonstrate the applicability of our solution to real systems. We also present related work on high availability for other file systems and HDFS, including the official solution, recently included in HDFS 2.0.0-alpha.", "title": "" }, { "docid": "160058dae12ea588352f5015483081fc", "text": "Semiotics is the study of signs. Signs take the form of words, images, sounds, odours, flavours, acts or objects but such things have no intrinsic meaning and become signs only when we invest them with meaning. ‘Nothing is a sign unless it is interpreted as a sign,’ declares Peirce (Peirce, 1931). The two dominant models of a sign are the linguist Ferdinand de Saussure and the philosopher Charles Sanders Peirce. This paper attempts to study the role of semiotics in linguistics. How signs play an important role in studying the language? Index: Semiotics - theory of signs and symbols; Semantics - study of sentences; Denotata - an actual object referred to by a linguistic expression; Divergent - move apart in different directions; Linguistics - scientific study of language. Introduction: Semiotics or semiology is the study of sign processes or signification and communication, signs and symbols. It is divided into the three following branches: (1) Semantics: Relation between signs and the things to which they refer, their denotata; (2) Syntactics: Relations among signs in formal structures; (3) Pragmatics: Relation between signs and their effects on people who use them. Syntactics is the branch of semiotics that deals with the formal properties of signs and symbols. It deals with the rules that govern how words are combined to form phrases and sentences.
According to Charles Morris “semantics deals with the relation of signs to their designate and the objects which they may or do denote” (Foundations of the theory of science, 1938); and, pragmatics deals with the biotic aspects of semiosis, that is, with all the psychological, biological and sociological phenomena which occur in the functioning of signs. The term, which was spelled semeiotics, was first used in English by Henry Stubbes in a very precise sense to denote the branch of medical science relating to the interpretation of signs. Semiotics is not widely institutionalized as an academic discipline. It is a field of study involving many different theoretical stances and methodological tools. One of the broadest definitions is that of Umberto Eco, who states that ‘semiotics is concerned with everything that can be taken as a sign’ (A Theory of Semiotics, 1979). Semiotics involves the study not only of what we refer to as ‘signs’ in everyday speech, but of anything which ‘stands for’ something else. In a semiotic sense, signs take the form of words, images, sounds, gestures and objects. Whilst for the linguist Saussure, ‘semiology’ was ‘a science which studies the role of signs as part of social life’ (Nature of the linguistic sign, 1916), for the philosopher Charles Peirce ‘semiotic’ was the ‘formal doctrine of signs’ which was closely related to logic. For him, ‘a sign... is something which stands to somebody for something in some respect or capacity’. He declared that ‘every thought is a sign.’ Literature review: Semiotics is often employed in the analysis of texts, although it is far more than just a mode of textual analysis. Here it should perhaps be noted that a ‘text’ can", "title": "" }, { "docid": "0da2484d00456618806d67aabc7e97d2", "text": "Students’ academic performance is critical for educational institutions because strategic programs can be planned in improving or maintaining students’ performance during their period of studies in the institutions. The academic performance in this study is measured by their cumulative grade point average (CGPA) upon graduating. This study presents the work of data mining in predicting the drop out feature of students. This study applies decision tree technique to choose the best prediction and analysis. The list of students who are predicted as likely to drop out from college by data mining is then turned over to teachers and management for direct or indirect intervention. Keywords: Intruder; hacker; cracker; Intrusion detection; anomaly detection; verification; validation.", "title": "" } ]
scidocsrr
105c3c79375dfbe5472de149cc647732
A Nine-Phase 18-Slot 14-Pole Interior Permanent Magnet Machine With Low Space Harmonics for Electric Vehicle Applications
[ { "docid": "c67fd84601a528ea951fcf9952f46316", "text": "Electric vehicles make use of permanent-magnet (PM) synchronous traction motors for their high torque density and efficiency. A comparison between interior PM and surface-mounted PM (SPM) motors is carried out, in terms of performance at given inverter ratings. The results of the analysis, based on a simplified analytical model and confirmed by finite element (FE) analysis, show that the two motors have similar rated power but that the SPM motor has barely no overload capability, independently of the available inverter current. Moreover, the loss behavior of the two motors is rather different in the various operating ranges with the SPM one better at low speed due to short end connections but penalized at high speed by the need of a significant deexcitation current. The analysis is validated through FE simulation of two actual motor designs.", "title": "" } ]
[ { "docid": "4834d8ed2d60cb419b8dc9256911ba09", "text": "In this paper we present a complete measurement study that compares YouTube traffic generated by mobile devices (smart-phones,tablets) with traffic generated by common PCs (desktops, notebooks, netbooks). We investigate the users' behavior and correlate it with the system performance. Our measurements are performed using unique data sets which are collected from vantage points in nation-wide ISPs and University campuses from two countries in Europe and the U.S.\n Our results show that the user access patterns are similar across a wide range of user locations, access technologies and user devices. Users stick with default player configurations, e.g., not changing video resolution or rarely enabling full screen playback. Furthermore it is very common that users abort video playback, with 60% of videos watched for no more than 20% of their duration.\n We show that the YouTube system is highly optimized for PC access and leverages aggressive buffering policies to guarantee excellent video playback. This however causes 25%-39% of data to be unnecessarily transferred, since users abort the playback very early. This waste of data transferred is even higher when mobile devices are considered. The limited storage offered by those devices makes the video download more complicated and overall less efficient, so that clients typically download more data than the actual video size. Overall, this result calls for better system optimization for both, PC and mobile accesses.", "title": "" }, { "docid": "0d9d1a52a789dc5d09c7de24286465bb", "text": "Text to Speech Synthesis along with the Speech Recognition is widely used throughout the world to enhance the accessibility of the information and enable even the disabled persons to interact with the computers in order to get the potential benefit from this high-tech revolution. In this paper we introduce a bi-lingual novel algorithm for the synthesis of Urdu and Sindhi language text. The devised bi-lingual algorithm uses knowledge based approach along with the hybrid rule based and concatenative acoustic methods to provide efficient and accurate conversion of Urdu and Sindhi text into the high quality speech. The algorithm has been implemented in the VB programming language with a GUI based interface. The proposed system works with high accuracy and has a great potential to be used for variety of applications. The system is versatile enough and can be used for speech recognition also.", "title": "" }, { "docid": "7670affb6d1c1f6a59b544d24dc4d34d", "text": "During the past years the Cloud Computing offer has exponentially grown, with new Cloud providers, platforms and services being introduced in the IT market. The extreme variety of services, often providing non uniform and incompatible interfaces, makes it hard for customers to decide how to develop, or even worse to migrate, their own application into the Cloud. This situation can only get worse when customers want to exploit services from different providers, because of the portability and interoperability issues that often arise. In this paper we propose a uniform, integrated, machine-readable, semantic representation of cloud services, patterns, appliances and their compositions. Our approach aims at supporting the development of new applications for the Cloud environment, using semantic models and automatic reasoning to enhance potability and interoperability when multiple platforms are involved. 
In particular, the proposed reasoning procedure allows to: perform automatic discovery of Cloud services and Appliances; map between agnostic and vendor dependent Cloud Patterns and Services; automatically enrich the semantic knowledge base.", "title": "" }, { "docid": "f54139d2f153081bf87942b6f16ede63", "text": "Young children with visual impairments greatly benefit from tactile graphics (illustrations, images, puzzles, objects) during their learning processes. In this paper we present insight about using a 3D printed tactile picture book as a design probe. This has allowed us to identify and engage stakeholders in our research on improving the technical and human processes required for creating 3D printed tactile pictures, and cultivate a community of practice around these processes. We also contribute insight about how our inperson and digital methods of interacting with teachers, parents, and other professionals dedicated to supporting children with visual impairments contributes to research practices.", "title": "" }, { "docid": "d81cadc01ab599fd34d2ccfa8377de51", "text": "1. The Situation in Cognition The situated cognition movement in the cognitive sciences, like those sciences themselves, is a loose-knit family of approaches to understanding the mind and cognition. While it has both philosophical and psychological antecedents in thought stretching back over the last century (see Gallagher, this volume, Clancey, this volume,), it has developed primarily since the late 1970s as an alternative to, or a modification of, the then predominant paradigms for exploring the mind within the cognitive sciences. For this reason it has been common to characterize situated cognition in terms of what it is not, a cluster of \"anti-isms\". Situated cognition has thus been described as opposed to Platonism, Cartesianism, individualism, representationalism, and even", "title": "" }, { "docid": "76404b7c30a78cfd361aaf2fcc8091d3", "text": "The trend towards renewable, decentralized, and highly fluctuating energy suppliers (e.g. photovoltaic, wind power, CHP) introduces a tremendous burden on the stability of future power grids. By adding sophisticated ICT and intelligent devices, various Smart Grid initiatives work on concepts for intelligent power meters, peak load reductions, efficient balancing mechanisms, etc. As in the Smart Grid scenario data is inherently distributed over different, often non-cooperative parties, mechanisms for efficient coordination of the suppliers, consumers and intermediators is required in order to ensure global functioning of the power grid. In this paper, a highly flexible market platform is introduced for coordinating self-interested energy agents representing power suppliers, customers and prosumers. These energy agents implement a generic bidding strategy that can be governed by local policies. These policies declaratively represent user preferences or constraints of the devices controlled by the agent. Efficient coordination between the agents is realized through a market mechanism that incentivizes the agents to reveal their policies truthfully to the market. By knowing the agent’s policies, an efficient solution for the overall system can be determined. As proof of concept implementation the market platform D’ACCORD is presented that supports various market structures ranging from a single local energy exchange to a hierarchical energy market structure (e.g. 
as proposed in [10]).", "title": "" }, { "docid": "b5c34fba76c6114ba4d1b9e05a81da53", "text": "In this paper, we describe a 100Gbps capable OpenFlow based Science DMZ approach which combines adaptive IDS load balancing, dynamic traffic filtering and a novel IDS based technique to detect “good” traffic flows and forward around performance challenged institutional firewalls. Evaluation of this approach was conducted using GridFTP and Iperf3. Results indicate this is a viable approach to enhance science data transfer performance and reduce security hardware costs.", "title": "" }, { "docid": "d3f64c0691d6fe2d25d0e790f4bf312e", "text": "Computer-mediated crowdfunding is an emerging paradigm used by individuals to solicit funds from other individuals to realize projects. We are interested in how and why these platforms work and the impact they can have on what projects are realized and how they are disseminated in the world. In this paper, we report preliminary findings from a qualitative exploratory study of creators and funders on three popular crowdfunding platforms. In addition to anticipated extrinsic motivators, such as securing funding (creators) and consuming products and experiences (funders), our initial findings suggest that people are also motivated to participate because of social interactions realized through crowdfunding platforms, such as strengthening commitment to an idea through feedback (creators) and feelings of connectedness to a community with similar interests and ideals (funders). We present this research in the context of what we are calling motivational crowdwork, the investigation of motivation as it relates to online task outsourcing, and discuss ideas for ongoing work in this area. Author", "title": "" }, { "docid": "1314d95642bcb00529f8ef7288fcfce0", "text": "In this paper, we proposed a new MAP method more suitable for low signal to noise (SNR) measurements. Different from conventional MAP method, we assume the projection space as a Gibbs field and the penalty term we used was defined in projection space. The spatial resolution of our method was studied and we furthermore modified our method to obtain nearly spatial invariant resolution. Both simulated data and real clinical data were used to testify our method, and future work was discussed at the end of the paper.", "title": "" }, { "docid": "93278184377465ec1b870cd54dc49a93", "text": "We advocate the usage of 3D Zernike invariants as descriptors for 3D shape retrieval. The basis polynomials of this representation facilitate computation of invariants under rotation, translation and scaling. Some theoretical results have already been summarized in the past from the aspect of pattern recognition and shape analysis. We provide practical analysis of these invariants along with algorithms and computational details. Furthermore, we give a detailed discussion on influence of the algorithm parameters like the conversion into a volumetric function, number of utilized coefficients, etc. As is revealed by our study, the 3D Zernike descriptors are natural extensions of recently introduced spherical harmonics based descriptors. We conduct a comparison of 3D Zernike descriptors against these regarding computational aspects and shape retrieval performance using several quality measures and based on experiments on the Princeton Shape Benchmark.", "title": "" }, { "docid": "dfcc6b34f008e4ea9d560b5da4826f4d", "text": "The paper describes a Chinese shadow play animation system based on Kinect. 
Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. Keywords—Gesture recognition, Kinect, shadow play animation, VRPN.", "title": "" }, { "docid": "c99fd51e8577a5300389c565aebebdb3", "text": "Face Detection and Recognition is an important area in the field of substantiation. Maintenance of records of students along with monitoring of class attendance is an area of administration that requires significant amount of time and efforts for management. Automated Attendance Management System performs the daily activities of attendance analysis, for which face recognition is an important aspect. The prevalent techniques and methodologies for detecting and recognizing faces by using feature extraction tools like mean, standard deviation etc fail to overcome issues such as scaling, pose, illumination, variations. The proposed system provides features such as detection of faces, extraction of the features, detection of extracted features, and analysis of student’s attendance. The proposed system integrates techniques such as Principal Component Analysis (PCA) for feature extraction and voila-jones for face detection &Euclidian distance classifier. Faces are recognized using PCA, using the database that contains images of students and is used to recognize student using the captured image. Better accuracy is attained in results and the system takes into account the changes that occurs in the face over the period of time.", "title": "" }, { "docid": "ebf1827c0bca84320c8184fe795e4941", "text": "Computer experiments often require dense sweeps over input parameters to obtain a qualitative understanding of their response. Such sweeps can be prohibitively expensive, and are unnecessary in regions where the response is easy predicted; well-chosen designs could allow a mapping of the response with far fewer simulation runs. Thus, there is a need for computationally inexpensive surrogate models and an accompanying method for selecting small designs. We explore a general methodology for addressing this need that uses non-stationary Gaussian processes. Binary trees partition the input space to facilitate non-stationarity and a Bayesian interpretation provides an explicit measure of predictive uncertainty that can be used to guide sampling. 
Our methods are illustrated on several examples, including a motivating example involving computational fluid dynamics simulation of a NASA reentry vehicle.", "title": "" }, { "docid": "094c8e301f30bc6987c4e86aed44d7d7", "text": "Light field cameras have many advantages over traditional cameras, as they allow the user to change various camera settings after capture. However, capturing light fields requires a huge bandwidth to record the data: a modern light field camera can only take three images per second. This prevents current consumer light field cameras from capturing light field videos. Temporal interpolation at such extreme scale (10x, from 3 fps to 30 fps) is infeasible as too much information will be entirely missing between adjacent frames. Instead, we develop a hybrid imaging system, adding another standard video camera to capture the temporal information. Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps. We adopt a learning-based approach, which can be decomposed into two steps: spatio-temporal flow estimation and appearance estimation. The flow estimation propagates the angular information from the light field sequence to the 2D video, so we can warp input images to the target view. The appearance estimation then combines these warped images to output the final pixels. The whole process is trained end-to-end using convolutional neural networks. Experimental results demonstrate that our algorithm outperforms current video interpolation methods, enabling consumer light field videography, and making applications such as refocusing and parallax view generation achievable on videos for the first time.", "title": "" }, { "docid": "476bb80edf6c54f0b6415d19f027ee19", "text": "Spin-transfer torque (STT) switching demonstrated in submicron sized magnetic tunnel junctions (MTJs) has stimulated considerable interest for developments of STT switched magnetic random access memory (STT-MRAM). Remarkable progress in STT switching with MgO MTJs and increasing interest in STTMRAM in semiconductor industry have been witnessed in recent years. This paper will present a review on the progress in the intrinsic switching current density reduction and STT-MRAM prototype chip demonstration. Challenges to overcome in order for STT-MRAM to be a mainstream memory technology in future technology nodes will be discussed. Finally, potential applications of STT-MRAM in embedded and standalone memory markets will be outlined.", "title": "" }, { "docid": "c27eecae33fe87779d3452002c1bdf8a", "text": "When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze. A wealth of information regarding intelligent decision making is conveyed by human gaze allocation; hence, exploiting such information has the potential to improve the agents’ performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while playing Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that can predict human gaze positions and visual attention with high accuracy (the gaze network) and then train another network to predict human actions (the policy network). 
Incorporating the learned attention model from the gaze network into the policy network significantly improves the action prediction accuracy and task performance.", "title": "" }, { "docid": "b0548d8bbb379d996db2fc726b1b40ca", "text": "Despite their enhanced marketplace visibility, validity of wearable photoplethysmographic heart rate monitoring is scarce. Forty-seven healthy participants performed seven, 6-min exercise bouts and completed a valid skin type scale. Participants wore an Omron HR500U (OHR) and a Mio Alpha (MA), two commercial wearable photoplethysmographic heart rate monitors. Data were compared to a Polar RS800CX (PRS). Means and error were calculated between devices using minutes 2-5. Compared to PRS, MA data was significantly different in walking, biking (2.41 ± 3.99 bpm and 3.26 ± 11.38 bpm, p < 0.05) and weight lifting (23.30 ± 31.94 bpm, p < 0.01). OHR differed from PRS in walking (4.95 ± 7.53 bpm, p < 0.05) and weight lifting (4.67 ± 8.95 bpm, p < 0.05). MA during elliptical, stair climbing and biking conditions demonstrated a strong correlation between jogging speed and error (r = 0.55, p < 0.0001), and showed differences in participants with less photosensitive skin.", "title": "" }, { "docid": "4c950f762413d747600237d6b1b5eb2a", "text": "Authenticity of an image is an important issue in many social areas such as Journalism, Forensic investigation, Criminal investigation and Security services etc. and digital images can be easily manipulated with the help of sophisticated photo editing software and high-resolution digital cameras. So there is a requirement for the implementation of new powerful and efficient algorithms for forgery detection of a tampered images. The splicing is the common forgery in which two images are combine and make a single composite and the duplicated region is retouched by performing operations like edge blurring to get the appearance of the authentic image. In this paper, we have proposed a new computationally efficient algorithm for splicing (copy-create) forgery detection of an image using block matching method. The proposed method achieve an accuracy of 87.75% within a small processing time by modeling the threshold.", "title": "" }, { "docid": "e03795645ca53f6d4f903ff8ff227054", "text": "This paper presents the experimental validation and some application examples of the proposed wafer/pad friction models for linear chemical-mechanical planarization (CMP) processes in the companion paper. An experimental setup of a linear CMP polisher is first presented and some polishing processes are then designed for validation of the wafer/pad friction modeling and analysis. The friction torques of both the polisher spindle and roller systems are used to monitor variations of the friction coefficient in situ . Verification of the friction model under various process parameters is presented. Effects of pad conditioning and the wafer film topography on wafer/pad friction are experimentally demonstrated. Finally, several application examples are presented showing the use of the roller motor current measurement for real-time process monitoring and control.", "title": "" }, { "docid": "f95155d17f444bd520684333cb7df26b", "text": "The automatic determination of emotional state from multimedia content is an inherently challenging problem with a broad range of applications including biomedical diagnostics, multimedia retrieval, and human computer interfaces. 
The Audio Video Emotion Challenge (AVEC) 2016 provides a well-defined framework for developing and rigorously evaluating innovative approaches for estimating the arousal and valence states of emotion as a function of time. It presents the opportunity for investigating multimodal solutions that include audio, video, and physiological sensor signals. This paper provides an overview of our AVEC Emotion Challenge system, which uses multi-feature learning and fusion across all available modalities. It includes a number of technical contributions, including the development of novel high- and low-level features for modeling emotion in the audio, video, and physiological channels. Low-level features include modeling arousal in audio with minimal prosodic-based descriptors. High-level features are derived from supervised and unsupervised machine learning approaches based on sparse coding and deep learning. Finally, a state space estimation approach is applied for score fusion that demonstrates the importance of exploiting the time-series nature of the arousal and valence states. The resulting system outperforms the baseline systems [10] on the test evaluation set with an achieved Concordant Correlation Coefficient (CCC) for arousal of 0.770 vs 0.702 (baseline) and for valence of 0.687 vs 0.638. Future work will focus on exploiting the time-varying nature of individual channels in the multi-modal framework.", "title": "" } ]
scidocsrr
d28433f13403045ee842ad1045f3a49a
Asymmetric Algorithms and Symmetric Algorithms: A Review
[ { "docid": "fe944f1845eca3b0c252ada2c0306d61", "text": "Now a days sharing the information over internet is becoming a critical issue due to security problems. Hence more techniques are needed to protect the shared data in an unsecured channel. The present work focus on combination of cryptography and steganography to secure the data while transmitting in the network. Firstly the data which is to be transmitted from sender to receiver in the network must be encrypted using the encrypted algorithm in cryptography .Secondly the encrypted data must be hidden in an image or video or an audio file with help of steganographic algorithm. Thirdly by using decryption technique the receiver can view the original data from the hidden image or video or audio file. Transmitting data or document can be done through these ways will be secured. In this paper we implemented three encrypt techniques like DES, AES and RSA algorithm along with steganographic algorithm like LSB substitution technique and compared their performance of encrypt techniques based on the analysis of its stimulated time at the time of encryption and decryption process and also its buffer size experimentally. The entire process has done in C#.", "title": "" } ]
[ { "docid": "544feea3dbdbd764cd2bba60ac1c9c93", "text": "Scholars in many disciplines have considered the antecedents and consequences of various forms of trust. This paper generates 11 propositions exploring the relationship between Human Resource Information Systems (HRIS) and the trust an individual places in the inanimate technology (technology trust) and models the effect of those relationships on HRIS implementation success. Specifically, organizational, technological, and user factors are considered and modeled to generate a set of testable propositions that can subsequently be investigated in various organizational settings. Eleven propositions are offered suggesting that organizational trust, pooled interdependence, organizational community, organizational culture, technology adoption, technology utility, technology usability, socialization, sensitivity to privacy, and predisposition to trust influence an individual’s level of trust in the HRIS technology (technology trust) and ultimately the success of an HRIS implementation process. A summary of the relationships between the key constructs in the model and recommendations for future research are provided.", "title": "" }, { "docid": "e292d4af3c77a11e8e2013fca0c8fb04", "text": "We present in this paper experiments on Table Recognition in hand-written register books. We first explain how the problem of row and column detection is modelled, and then compare two Machine Learning approaches (Conditional Random Field and Graph Convolutional Network) for detecting these table elements. Evaluation was conducted on death records provided by the Archives of the Diocese of Passau. With an F-1 score of 89, both methods provide a quality which allows for Information Extraction. Software and dataset are open source/data.", "title": "" }, { "docid": "98b32860be2e016d20a49994de4149f1", "text": "This paper presents a method for optimizing software testing efficiency by identifying the most critical path clusters in a program. We do this by developing variable length Genetic Algorithms that optimize and select the software path clusters which are weighted in accordance with the criticality of the path. Exhaustive software testing is rarely possible because it becomes intractable for even medium sized software. Typically only parts of a program can be tested, but these parts are not necessarily the most error prone. Therefore, we are developing a more selective approach to testing by focusing on those parts that are most critical so that these paths can be tested first. By identifying the most critical paths, the testing efficiency can be increased.", "title": "" }, { "docid": "cf264a124cc9f68cf64cacb436b64fa3", "text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. From five conventional aspects of clustering, we investigate their validation properties. 
Experiment results show that S\\_Dbw is the only internal validation measure which performs well in all five aspects, while other measures have certain limitations in different application scenarios.", "title": "" }, { "docid": "cf43e30eab17189715b085a6e438ea7d", "text": "This paper presents our investigation of non-orthogonal multiple access (NOMA) as a novel and promising power-domain user multiplexing scheme for future radio access. Based on information theory, we can expect that NOMA with a successive interference canceller (SIC) applied to the receiver side will offer a better tradeoff between system efficiency and user fairness than orthogonal multiple access (OMA), which is widely used in 3.9 and 4G mobile communication systems. This improvement becomes especially significant when the channel conditions among the non-orthogonally multiplexed users are significantly different. Thus, NOMA can be expected to efficiently exploit the near-far effect experienced in cellular environments. In this paper, we describe the basic principle of NOMA in both the downlink and uplink and then present our proposed NOMA scheme for the scenario where the base station is equipped with multiple antennas. Simulation results show the potential system-level throughput gains of NOMA relative to OMA. key words: cellular system, non-orthogonal multiple access, superposition coding, successive interference cancellation", "title": "" }, { "docid": "5e8014d1985991e21f6f985569e6ef91", "text": "Marie Evans Schmidt and Elizabeth Vandewater review research on links between various types of electronic media and the cognitive skills of school-aged children and adolescents. One central finding of studies to date, they say, is that the content delivered by electronic media is far more influential than the media themselves. Most studies, they point out, find a small negative link between the total hours a child spends viewing TV and that child's academic achievement. But when researchers take into account characteristics of the child, such as IQ or socioeconomic status, this link typically disappears. Content appears to be crucial. Viewing educational TV is linked positively with academic achievement; viewing entertainment TV is linked negatively with achievement. When it comes to particular cognitive skills, say the authors, researchers have found that electronic media, particularly video games, can enhance visual spatial skills, such as visual tracking, mental rotation, and target localization. Gaming may also improve problem-solving skills. Researchers have yet to understand fully the issue of transfer of learning from electronic media. Studies suggest that, under some circumstances, young people are able to transfer what they learn from electronic media to other applications, but analysts are uncertain how such transfer occurs. In response to growing public concern about possible links between electronic media use and attention problems in children and adolescents, say the authors, researchers have found evidence for small positive links between heavy electronic media use and mild attention problems among young people but have found only inconsistent evidence so far for a link between attention deficit hyperactivity disorder and media use. 
The authors point out that although video games, interactive websites, and multimedia software programs appear to offer a variety of possible benefits for learning, there is as yet little empirical evidence to suggest that such media are more effective than other forms of instruction.", "title": "" }, { "docid": "9096c5bfe44df6dc32641b8f5370d8d0", "text": "This paper presents a nonintrusive prototype computer vision system for monitoring a driver's vigilance in real time. It is based on a hardware system for the real-time acquisition of a driver's images using an active IR illuminator and the software implementation for monitoring some visual behaviors that characterize a driver's level of vigilance. Six parameters are calculated: Percent eye closure (PERCLOS), eye closure duration, blink frequency, nodding frequency, face position, and fixed gaze. These parameters are combined using a fuzzy classifier to infer the level of inattentiveness of the driver. The use of multiple visual parameters and the fusion of these parameters yield a more robust and accurate inattention characterization than by using a single parameter. The system has been tested with different sequences recorded in night and day driving conditions in a motorway and with different users. Some experimental results and conclusions about the performance of the system are presented", "title": "" }, { "docid": "d0778852e57dddf8a454dd609908ff87", "text": "Abstract: Trivariate barycentric coordinates can be used both to express a point inside a tetrahedron as a convex combination of the four vertices and to linearly interpolate data given at the vertices. In this paper we generalize these coordinates to convex polyhedra and the kernels of star-shaped polyhedra. These coordinates generalize in a natural way a recently constructed set of coordinates for planar polygons, called mean value coordinates.", "title": "" }, { "docid": "93ec9adabca7fac208a68d277040c254", "text": "UNLABELLED\nWe developed cyNeo4j, a Cytoscape App to link Cytoscape and Neo4j databases to utilize the performance and storage capacities Neo4j offers. We implemented a Neo4j NetworkAnalyzer, ForceAtlas2 layout and Cypher component to demonstrate the possibilities a distributed setup of Cytoscape and Neo4j have.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe app is available from the Cytoscape App Store at http://apps.cytoscape.org/apps/cyneo4j, the Neo4j plugins at www.github.com/gsummer/cyneo4j-parent and the community and commercial editions of Neo4j can be found at http://www.neo4j.com.\n\n\nCONTACT\ngeorg.summer@gmail.com.", "title": "" }, { "docid": "d3cc065dd9212cc351662c51bd5f2284", "text": "Human activities comprise several sub-activities performed in a sequence and involve interactions with various objects. This makes reasoning about the object affordances a central task for activity recognition. In this work, we consider the problem of jointly labeling the object affordances and human activities from RGB-D videos. We frame the problem as a Markov Random Field where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural SVM approach, where labeling over various alternate temporal segmentations are considered as latent variables. 
We tested our method on a dataset comprising 120 activity videos collected from four subjects, and obtained an end-to-end precision of 81.8% and recall of 80.0% for labeling the activities.", "title": "" }, { "docid": "90cafc449ebe112a022715f7b6845ba9", "text": "Deep neural nets have caused a revolution in many classification tasks. A related ongoing revolution—also theoretically not understood—concerns their ability to serve as generative models for complicated types of data such as images and texts. These models are trained using ideas like variational autoencoders and Generative Adversarial Networks. We take a first cut at explaining the expressivity of multilayer nets by giving a sufficient criterion for a function to be approximable by a neural network with n hidden layers. A key ingredient is Barron’s Theorem [Bar93], which gives a Fourier criterion for approximability of a function by a neural network with 1 hidden layer. We show that a composition of n functions which satisfy certain Fourier conditions (“Barron functions”) can be approximated by a n+ 1layer neural network. For probability distributions, this translates into a criterion for a probability distribution to be approximable in Wasserstein distance—a natural metric on probability distributions—by a neural network applied to a fixed base distribution (e.g., multivariate gaussian). Building up recent lower bound work, we also give an example function that shows that composition of Barron functions is more expressive than Barron functions alone.", "title": "" }, { "docid": "23159d5a2ddda7d83ea4befa808f1af4", "text": "We investigate potential benefits of employing Design Structure Matrix (DSM) in the context of Model-Based Systems Engineering (MBSE) for the purposes of analyzing and improving the design of a product-project ensemble. Focusing on process DSM, we present an algorithm for bidirectional transformation frame between a product-project system model and its corresponding Model-Based DSM (MDSM). Using Object-Process Methodology (OPM) as the underlying modeling language, we examine and characterize useful and insightful relationships between the system model and its MDSM. An unmanned aerial vehicle case study demonstrates the semantics of and analogy between various types of relationships as they are reflected in both the OPM system model and the MDSM derived from it. Finally, we conclude with further research direction on showing how clustering of DSM processes can be reflected back as an improvement of the OPM model.", "title": "" }, { "docid": "31512e01cebd226da8db288ecf6869c5", "text": "In recent years, deep learning has shown performance breakthroughs in many applications, such as image detection, image segmentation, pose estimation, and speech recognition. It was also applied successfully to malware detection. However, this comes with a major concern: deep networks have been found to be vulnerable to adversarial examples. So far successful attacks have been proved to be very effective especially in the domains of images and speech, where small perturbations to the input signal do not change how it is perceived by humans but greatly affect the classification of the model under attack. Our goal is to modify a malicious binary so it would be detected as benign while preserving its original functionality. In contrast to images or speech, small modifications to bytes of the binary lead to significant changes in the functionality. 
We introduce a novel approach to generating adversarial example for attacking a whole-binary malware detector. We append to the binary file a small section, which contains a selected sequence of bytes that steers the prediction of the network from malicious to be benign with high confidence. We applied this approach to a CNNbased malware detection model and showed extremely high rates of success in the attack.", "title": "" }, { "docid": "372fa95863cf20fdcb632d033cb4d944", "text": "Traditional approaches for color propagation in videos rely on some form of matching between consecutive video frames. Using appearance descriptors, colors are then propagated both spatially and temporally. These methods, however, are computationally expensive and do not take advantage of semantic information of the scene. In this work we propose a deep learning framework for color propagation that combines a local strategy, to propagate colors frame-by-frame ensuring temporal stability, and a global strategy, using semantics for color propagation within a longer range. Our evaluation shows the superiority of our strategy over existing video and image color propagation methods as well as neural photo-realistic style transfer approaches.", "title": "" }, { "docid": "e4e97569f53ddde763f4f28559c96ba6", "text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.", "title": "" }, { "docid": "456d376029d594170c81dbe455a4086a", "text": "Long range, low power networks are rapidly gaining acceptance in the Internet of Things (IoT) due to their ability to economically support long-range sensing and control applications while providing multi-year battery life. LoRa is a key example of this new class of network and is being deployed at large scale in several countries worldwide. As these networks move out of the lab and into the real world, they expose a large cyber-physical attack surface. Securing these networks is therefore both critical and urgent. This paper highlights security issues in LoRa and LoRaWAN that arise due to the choice of a robust but slow modulation type in the protocol. We exploit these issues to develop a suite of practical attacks based around selective jamming. These attacks are conducted and evaluated using commodity hardware. The paper concludes by suggesting a range of countermeasures that can be used to mitigate the attacks.", "title": "" }, { "docid": "c2e92f8289ebf50ca363840133dc2a43", "text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.08.042 ⇑ Address: WOLNM & ESIME Zacatenco, Instituto Politécnico Nacional, U. Profesional Adolfo López Mateos, Edificio Z-4, 2do piso, cubiculo 6, Miguel Othón de Mendizábal S/N, La Escalera, Gustavo A. Madero, D.F., C.P. 07320, Mexico. Tel.: +52 55 5694 0916/+52 55 5454 2611 (cellular); fax: +52 55 5694 0916. 
E-mail address: apenaa@ipn.mx URL: http://www.wolnm.org/apa 1 AIWBES: adaptive and intelligent web-based educational systems; BKT: Bayesian knowledge tracing; CBES: computer-based educational systems; CBIS: computerbased information system,; DM: data mining; DP: dynamic programming; EDM: educational data mining; EM: expectation maximization; HMM: hidden Markov model; IBL: instances-based learning; IRT: item response theory; ITS: intelligent tutoring systems; KDD: knowledge discovery in databases; KT: knowledge tracing; LMS: learning management systems; SNA: social network analysis; SWOT: strengths, weakness, opportunities, and threats; WBC: web-based courses; WBES: web-based educational systems. Alejandro Peña-Ayala ⇑", "title": "" }, { "docid": "a92aa1ea6faf19a2257dce1dda9cd0d0", "text": "This paper introduces a novel content-adaptive image downscaling method. The key idea is to optimize the shape and locations of the downsampling kernels to better align with local image features. Our content-adaptive kernels are formed as a bilateral combination of two Gaussian kernels defined over space and color, respectively. This yields a continuum ranging from smoothing to edge/detail preserving kernels driven by image content. We optimize these kernels to represent the input image well, by finding an output image from which the input can be well reconstructed. This is technically realized as an iterative maximum-likelihood optimization using a constrained variation of the Expectation-Maximization algorithm. In comparison to previous downscaling algorithms, our results remain crisper without suffering from ringing artifacts. Besides natural images, our algorithm is also effective for creating pixel art images from vector graphics inputs, due to its ability to keep linear features sharp and connected.", "title": "" }, { "docid": "2b8ca8be8d5e468d4cd285ecc726eceb", "text": "These days, large-scale graph processing becomes more and more important. Pregel, inspired by Bulk Synchronous Parallel, is one of the highly used systems to process large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. Superstep is a very time-consuming operation, used by Pregel, to synchronize distributed computations in a cluster of computers. However, it may become a bottleneck when the number of communications increases in a graph with million vertices. Superstep works like a barrier in Pregel that increases the side effect of skew problem in distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices resided on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhancing parallelism in our new computational model is another important feature that manifolds the speed of graph analysis programs. More interestingly, ExPregel uses the same model of programming as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as number of supersteps from 45% to 96%. Runtime speed up in the proposed model varies from 1.2× to 30×. Copyright © 2015 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "69871f7730ce78129cb07b029151de48", "text": "Biological signal processing offers an alternative to improve life quality in handicapped patients. 
In this sense, it is possible to control devices such as wheelchairs or computer systems. The signals that are usually used are EMG, EOG and EEG. When the loss of ability is severe, the use of EMG signals is not possible because the person has lost, as in the case of ALS patients, the ability to control his body. EOG offers low resolution because the technique depends on many external and uncontrollable variables of the environment. This work shows the design of a set of algorithms capable of classifying brain signals related to imaginary motor activities (imaginary left- and right-hand movements). First, digital signal processing is used to select and extract discriminant features, using parametrical methods for the estimation of the power spectral density and the Fisher criterion for separability. The signal is then classified using linear discriminant analysis. The results show that it is possible to obtain good performance with error rates as low as 13% and that the use of parametrical methods for power spectral density estimation can improve the accuracy of the Brain Computer Interface.", "title": "" } ]
scidocsrr
8dbeb1c275a094146b26ea1ab3e314cc
Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling
[ { "docid": "93f1ee5523f738ab861bcce86d4fc906", "text": "Semantic role labeling (SRL) is one of the basic natural language processing (NLP) problems. To this date, most of the successful SRL systems were built on top of some form of parsing results (Koomen et al., 2005; Palmer et al., 2010; Pradhan et al., 2013), where pre-defined feature templates over the syntactic structure are used. The attempts of building an end-to-end SRL learning system without using parsing were less successful (Collobert et al., 2011). In this work, we propose to use deep bi-directional recurrent network as an end-to-end system for SRL. We take only original text information as input feature, without using any syntactic knowledge. The proposed algorithm for semantic role labeling was mainly evaluated on CoNLL-2005 shared task and achieved F1 score of 81.07. This result outperforms the previous state-of-the-art system from the combination of different parsing trees or models. We also obtained the same conclusion with F1 = 81.27 on CoNLL2012 shared task. As a result of simplicity, our model is also computationally efficient that the parsing speed is 6.7k tokens per second. Our analysis shows that our model is better at handling longer sentences than traditional models. And the latent variables of our model implicitly capture the syntactic structure of a sentence.", "title": "" }, { "docid": "b10447097f8d513795b4f4e08e1838d8", "text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.", "title": "" }, { "docid": "60d21d395c472eb36bdfd014c53d918a", "text": "We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.", "title": "" } ]
[ { "docid": "d0f50fa4ef375759dcac7270b006f147", "text": "Automatic separation of signatures from a document page involves difficult challenges due to the free-flow nature of handwriting, overlapping/touching of signature parts with printed text, noise, etc. In this paper, we have proposed a novel approach for the segmentation of signatures from machine printed signed documents. The algorithm first locates the signature block in the document using word level feature extraction. Next, the signature strokes that touch or overlap with the printed texts are separated. A stroke level classification is then performed using skeleton analysis to separate the overlapping strokes of printed text from the signature. Gradient based features and Support Vector Machine (SVM) are used in our scheme. Finally, a Conditional Random Field (CRF) model energy minimization concept based on approximated labeling by graph cut is applied to label the strokes as \"signature\" or \"printed text\" for accurate segmentation of signatures. Signature segmentation experiment is performed in \"tobacco\" dataset1 and we have obtained encouraging results.", "title": "" }, { "docid": "81c02e708a21532d972aca0b0afd8bb5", "text": "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.", "title": "" }, { "docid": "bb8115f8c172e22bd0ff70bd079dfa98", "text": "This paper reports on the second generation of the Pleated Pneumatic Artificial Muscle (PPAM) which has been developed to extend the life span of its first prototype. This type of artificial was developed to overcome dry friction and material deformation which is present in the widely used McKibben type of artificial muscle. The essence of the PPAM is its pleated membrane structure which enables the muscle to work at low pressures and at large contractions. There is a growing interest in this kind of actuation for robotics applications due to its high power to weight ratio and the adaptable compliance, especially for legged locomotion and robot applications in direct contact with a human. This paper describes the design of the second generation PPAM, for which specifically the membrane layout has been changed. In function of this new layout the mathematical model, developed for the first prototype, has been reformulated. This paper gives an elaborate discussion on this mathematical model which represents the force generation and enclosed muscle volume. Static load tests on some real muscles, which have been carried out in order to validate the mathematical model, are then discussed. Furthermore are given two robotic applications which currently use these pneumatic artificial muscles. 
One is the biped Lucy and the other is a manipulator application which works in direct contact with an operator.", "title": "" }, { "docid": "5562bb6fdc8864a23e7ec7992c7bb023", "text": "Bacteria are known to communicate primarily via secreted extracellular factors. Here we identify a previously uncharacterized type of bacterial communication mediated by nanotubes that bridge neighboring cells. Using Bacillus subtilis as a model organism, we visualized transfer of cytoplasmic fluorescent molecules between adjacent cells. Additionally, by coculturing strains harboring different antibiotic resistance genes, we demonstrated that molecular exchange enables cells to transiently acquire nonhereditary resistance. Furthermore, nonconjugative plasmids could be transferred from one cell to another, thereby conferring hereditary features to recipient cells. Electron microscopy revealed the existence of variously sized tubular extensions bridging neighboring cells, serving as a route for exchange of intracellular molecules. These nanotubes also formed in an interspecies manner, between B. subtilis and Staphylococcus aureus, and even between B. subtilis and the evolutionarily distant bacterium Escherichia coli. We propose that nanotubes represent a major form of bacterial communication in nature, providing a network for exchange of cellular molecules within and between species.", "title": "" }, { "docid": "a9d93cb2c0d6d76a8597bcd64ecd00ba", "text": "Hospital-based nurses (N = 832) and doctors (N = 603) in northern and eastern Spain completed a survey of job burnout, areas of work life, and management issues. Analysis of the results provides support for a mediation model of burnout that depicts employees’ energy, involvement, and efficacy as intermediary experiences between their experiences of work life and their evaluations of organizational change. The key element of this model is its focus on employees’ capacity to influence their work environments toward greater conformity with their core values. The model considers 3 aspects of that capacity: decision-making participation, organizational justice, and supervisory relationships. The analysis supports this model and emphasizes a central role for first-line supervisors in employees’ experiences of work life.", "title": "" }, { "docid": "20e09739910e5f3e7e721937b3464b6c", "text": "The Andes system demonstrates that student learning can be significantly increased by upgrading only their homework problem-solving support. Although Andes is called an intelligent tutoring system, it actually replaces only the students' pencil and paper as they do problem-solving homework. Students do the same problems as before, study the same textbook, and attend the same lectures, labs and recitations. Five years of experimentation at the United States Naval Academy indicates that Andes significantly improves student learning. Andes' key feature appears to be the grain-size of interaction. Whereas most tutoring systems have students enter only the answer to a problem, Andes has students enter a whole derivation, which may consist of many steps, such as drawing vectors, drawing coordinate systems, defining variables and writing equations. Andes gives feedback after each step. When the student asks for help in the middle of problem-solving, Andes gives hints on what's wrong with an incorrect step or on what kind of step to do next. 
Thus, the grain size of Andes' interaction is a single step in solving the problem, whereas the grain size of a typical tutoring system's interaction is the answer to the problem. This report is a comprehensive description of Andes. It describes Andes' pedagogical principles and features, the system design and implementation, the evaluations of pedagogical effectiveness, and our plans for dissemination.", "title": "" }, { "docid": "341e3832bf751688a9deabdfb5687f69", "text": "The NINCDS-ADRDA and the DSM-IV-TR criteria for Alzheimer's disease (AD) are the prevailing diagnostic standards in research; however, they have now fallen behind the unprecedented growth of scientific knowledge. Distinctive and reliable biomarkers of AD are now available through structural MRI, molecular neuroimaging with PET, and cerebrospinal fluid analyses. This progress provides the impetus for our proposal of revised diagnostic criteria for AD. Our framework was developed to capture both the earliest stages, before full-blown dementia, as well as the full spectrum of the illness. These new criteria are centred on a clinical core of early and significant episodic memory impairment. They stipulate that there must also be at least one or more abnormal biomarkers among structural neuroimaging with MRI, molecular neuroimaging with PET, and cerebrospinal fluid analysis of amyloid beta or tau proteins. The timeliness of these criteria is highlighted by the many drugs in development that are directed at changing pathogenesis, particularly at the production and clearance of amyloid beta as well as at the hyperphosphorylation state of tau. Validation studies in existing and prospective cohorts are needed to advance these criteria and optimise their sensitivity, specificity, and accuracy.", "title": "" }, { "docid": "4af5aa24efc82a8e66deb98f224cd033", "text": "In recent years, the rapid spread of mobile devices has created a vast amount of mobile data. However, some shallow-structure models such as the support vector machine (SVM) have difficulty dealing with the high dimensional data that arise with the development of mobile networks. In this paper, we analyze mobile data to predict human trajectories in order to understand human mobility patterns via a deep-structure model called “DeepSpace”. To the best of our knowledge, it is the first time that the deep learning approach is applied to predicting human trajectories. Furthermore, we develop the vanilla convolutional neural network (CNN) to be an online learning system, which can deal with the continuous mobile data stream. In general, “DeepSpace” consists of two different prediction models corresponding to different scales in space (the coarse prediction model and fine prediction models). These two models constitute a hierarchical structure, which enables the whole architecture to be run in parallel. Finally, we test our model based on the data usage detail records (UDRs) from the mobile cellular network in a city of southeastern China, instead of the call detail records (CDRs) which are widely used by others as usual. The experimental results show that “DeepSpace” is promising for human trajectory prediction.", "title": "" }, { "docid": "6ba2aed7930d4c7fee807a0f4904ddc5", "text": "This work lies in the biometrics field and has as its goal the development of a fully automatic fingerprint identification system based on support vector machines. 
Promising results from first experiments pushed us to develop coding and recognition algorithms which are specifically associated with this system. In this context, work was devoted to developing algorithms for processing the original image, localizing minutiae and singular points, and coding with Gabor filters, and to testing these algorithms on well-known databases: the FVC2004 databases and the FingerCell database. Performance evaluation has shown that the SVM achieves a good recognition rate compared with results obtained using a classic RBF neural network. Keywords—Biometry, Core and Delta points Detection, Gabor filters coding, Image processing and Support vector machine.", "title": "" }, { "docid": "54ceed51f750eadda3038b42eb9977a5", "text": "Starting from the revolutionary Retinex by Land and McCann, several further perceptually inspired color correction models have been developed with different aims, e.g. reproduction of color sensation, robust feature recognition, enhancement of color images. Such models have a differential, spatially-variant and non-linear nature and they can coarsely be distinguished between white-patch (WP) and gray-world (GW) algorithms. In this paper we show that the combination of a pure WP algorithm (RSR: random spray Retinex) and an essentially GW one (ACE) leads to a more robust and better performing model (RACE). The choice of RSR and ACE follows from the recent identification of a unified spatially-variant approach for both algorithms. Mathematically, the originally distinct non-linear and differential mechanisms of RSR and ACE have been fused using the spray technique and local average operations. The investigation of RACE allowed us to highlight a common drawback of differential models: corruption of uniform image areas. To overcome this intrinsic defect, we devised a local and global contrast-based and image-driven regulation mechanism that has a general applicability to perceptually inspired color correction algorithms. Tests, comparisons and discussions are presented.", "title": "" }, { "docid": "1c63438d58ef3817ce9b637bddc57fc1", "text": "Object recognition strategies are increasingly based on regional descriptors such as SIFT or HOG at a sparse set of points or on a dense grid of points. Despite their success on databases such as PASCAL and CALTECH, the capability of such a representation in capturing the essential object content of the image is not well-understood: How large is the equivalence class of images sharing the same HOG descriptor? Are all these images from the same object category, and if not, do the non-category images resemble random images which cannot generically arise from imaged scenes? How frequently do images from two categories share the same HOG-based representation? These questions are increasingly more relevant as very large databases such as ImageNet and LabelMe are being developed where the current object recognition strategies show limited success. We examine these questions by introducing the metameric class of moments of HOG which allows for a target image to be morphed into an impostor image sharing the HOG representation of a source image while retaining the initial visual appearance. We report that two distinct images can be made to share the same HOG representation when the overlap between HOG patches is minimal, and the success of this method falls with increasing overlap. 
This paper is therefore a step in the direction of developing a sampling theorem for representing images by HOG features.", "title": "" }, { "docid": "2a4360b7031aa9c191a81b1b14307db9", "text": "Wireless body area network (BAN) is a promising technology for real-time monitoring of physiological signals to support medical applications. In order to ensure the trustworthy and reliable gathering of patient's critical health information, it is essential to provide node authentication service in a BAN, which prevents an attacker from impersonation and false data/command injection. Although quite fundamental, the authentication in BAN still remains a challenging issue. On one hand, traditional authentication solutions depend on prior trust among nodes whose establishment would require either key pre-distribution or non-intuitive participation by inexperienced users, while they are vulnerable to key compromise. On the other hand, most existing non-cryptographic authentication schemes require advanced hardware capabilities or significant modifications to the system software, which are impractical for BANs.\n In this paper, for the first time, we propose a lightweight body area network authentication scheme (BANA) that does not depend on prior-trust among the nodes and can be efficiently realized on commercial off-the-shelf low-end sensor devices. This is achieved by exploiting physical layer characteristics unique to a BAN, namely, the distinct received signal strength (RSS) variation behaviors between an on-body communication channel and an off-body channel. Our main finding is that the latter is more unpredictable over time, especially under various body motion scenarios. This unique channel characteristic naturally arises from the multi-path environment surrounding a BAN, and cannot be easily forged by attackers. We then adopt clustering analysis to differentiate the signals from an attacker and a legitimate node. The effectiveness of BANA is validated through extensive real-world experiments under various scenarios. It is shown that BANA can accurately identify multiple attackers with minimal amount of overhead.", "title": "" }, { "docid": "acbac38a7de49bf1b6ad15abb007b601", "text": "Our everyday environments are gradually becoming intelligent, facilitated both by technological development and user activities. Although large-scale intelligent environments are still rare in actual everyday use, they have been studied for quite a long time, and several user studies have been carried out. In this paper, we present a user-centric view of intelligent environments based on published research results and our own experiences from user studies with concepts and prototypes. We analyze user acceptance and users’ expectations that affect users’ willingness to start using intelligent environments and to continue using them. We discuss user experience of interacting with intelligent environments where physical and virtual elements are intertwined. Finally, we touch on the role of users in shaping their own intelligent environments instead of just using ready-made environments. People are not merely “using” the intelligent environments but they live in them, and they experience the environments via embedded services and new interaction tools as well as the physical and social environment. Intelligent environments should provide emotional as well as instrumental value to the people who live in them, and the environments should be trustworthy and controllable both by regular users and occasional visitors. 
", "title": "" }, { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-to-sequence question-generation model with a copy mechanism. Empirically, our key-phrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This two-stage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" }, { "docid": "bf5f3aedb8eadc7c9b12b6d670f93c49", "text": "Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.", "title": "" }, { "docid": "f8005e53658743d70abdf8f6dcb78819", "text": "We present a novel approach to visually locate bodies of research within the sciences, both at each moment of time and dynamically. This article describes how this approach fits with other efforts to locally and globally map scientific outputs. We then show how these science overlay maps help benchmark, explore collaborations, and track temporal changes, using examples of universities, corporations, funding agencies, and research topics. We address their conditions of application and discuss advantages, downsides and limitations. Overlay maps especially help investigate the increasing number of scientific developments and organisations that do not fit within traditional disciplinary categories. 
We make these tools available (on the Internet) to enable researchers to explore the ongoing socio-cognitive transformations of science and technology systems.", "title": "" }, { "docid": "bf05dca7c0ac521045794c90c91eba9d", "text": "The optimization and analysis of new waveguide polarizers have been carried out on the basis of a rigorous full-wave model. These polarizers transform the dominant mode of the input rectangular waveguide into an elliptically polarized wave in the output square waveguide. The phase-shifting module is realized on the basis of one or two sections of a square waveguide having two diagonally placed square ridges. It has been found that polarizers with a single-section phase shifter can provide a bandwidth from 11% to 15% at an axial ratio level of r < 2 dB and a return loss level of LR > 20 dB, whereas the two-section ones have bandwidths of more than 23% at r < 1 dB and LR > 23 dB.", "title": "" }, { "docid": "8d957e6c626855a06ac2256c4e7cd15c", "text": "This article presents a robotic dataset collected from the largest underground copper mine in the world. The sensor measurements from a 3D scanning lidar, a 2D radar, and stereo cameras were recorded from an approximately two kilometer traverse of a production-active tunnel. The equipment used and the data collection process is discussed in detail, along with the format of the data. This dataset is suitable for research in robotic navigation, as well as simultaneous localization and mapping. The download instructions are available at the following website http://dataset.amtc.cl.", "title": "" }, { "docid": "188e52b27ae465c4785f0e1811c3014a", "text": "High-voltage p-channel 4H-SiC insulated gate bipolar transistors (IGBTs) have been fabricated and characterized. The devices have a forward voltage drop of 7.2 V at 100 A/cm2 and a -16 V gate bias at 25°C, corresponding to a specific on-resistance of 72 mΩ·cm2 and a differential on-resistance of 26 mΩ·cm2. Hole mobility of 12 cm2/V·s in the inversion channel with a threshold voltage of -6 V was achieved by optimizing the n+ well doping profile and gate oxidation process. A novel current enhancement layer was adopted to reduce the JFET resistance and enhance conductivity modulation by improving hole current spreading and suppressing the electron current conduction through the top n-p-n transistor. Inductive switching results have shown that the p-IGBT exhibited a turn-off time of ~1 μs and a turn-off energy loss of 12 mJ at 4-kV dc-link voltage and 6-A load current at 25°C. The turn-off trajectory from the measured inductive load switching waveforms and numerical simulations shows that the p-IGBT had a near-square reverse bias safe operating area. Numerical simulations have been conducted to achieve an improved tradeoff between forward voltage drop and switching off energy by investigating the effects of drift layer lifetime and p-buffer layer parameters. The advantages of SiC p-IGBTs, such as the potential of very low ON-state resistance, slightly positive temperature coefficient, high switching speed, small switching losses, and large safe operating area, make them suitable and attractive for high-power high-frequency applications.", "title": "" } ]
scidocsrr
6e4ff63141e03d0563b71fef3d6a646a
2D/3D image registration using regression learning
[ { "docid": "6e4bb5d16c72c8dc706f934fa3558adb", "text": "This paper examine the Euler-Lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in Dupuis et al. (1998) and Trouvé (1995) in which two images I 0, I 1 are given and connected via the diffeomorphic change of coordinates I 0○ϕ−1=I 1 where ϕ=Φ1 is the end point at t= 1 of curve Φ t , t∈[0, 1] satisfying .Φ t =v t (Φ t ), t∈ [0,1] with Φ0=id. The variational problem takes the form $$\\mathop {\\arg {\\text{m}}in}\\limits_{\\upsilon :\\dot \\phi _t = \\upsilon _t \\left( {\\dot \\phi } \\right)} \\left( {\\int_0^1 {\\left\\| {\\upsilon _t } \\right\\|} ^2 {\\text{d}}t + \\left\\| {I_0 \\circ \\phi _1^{ - 1} - I_1 } \\right\\|_{L^2 }^2 } \\right),$$ where ‖v t‖ V is an appropriate Sobolev norm on the velocity field v t(·), and the second term enforces matching of the images with ‖·‖L 2 representing the squared-error norm. In this paper we derive the Euler-Lagrange equations characterizing the minimizing vector fields v t, t∈[0, 1] assuming sufficient smoothness of the norm to guarantee existence of solutions in the space of diffeomorphisms. We describe the implementation of the Euler equations using semi-lagrangian method of computing particle flows and show the solutions for various examples. As well, we compute the metric distance on several anatomical configurations as measured by ∫0 1‖v t‖ V dt on the geodesic shortest paths.", "title": "" } ]
[ { "docid": "04d66f58cea190d7d7ec8654b6c81d3b", "text": "Lymphedema is a chronic, progressive condition caused by an imbalance of lymphatic flow. Upper extremity lymphedema has been reported in 16-40% of breast cancer patients following axillary lymph node dissection. Furthermore, lymphedema following sentinel lymph node biopsy alone has been reported in 3.5% of patients. While the disease process is not new, there has been significant progress in the surgical care of lymphedema that can offer alternatives and improvements in management. The purpose of this review is to provide a comprehensive update and overview of the current advances and surgical treatment options for upper extremity lymphedema.", "title": "" }, { "docid": "25e6f4b6c86fac766c09aae302ec9516", "text": "ABSTRACT. The purpose of this study is to construct doctors’ acceptance model of Electronic Medical Records (EMR) in private hospitals. The model extends the Technology Acceptance Model (TAM) with two factors of Individual Capabilities; Self-Efficacy (SE) and Perceived Behavioral Control (PBC). The initial findings proposes additional factors over the original factors in TAM making Perceived Usefulness (PU), Perceived Ease Of Use (PEOU), Behavioral Intention to use (BI), SE, and PBC working in incorporation. A cross-sectional survey was used in which data were gathered by a personal administered questionnaire as the instrument for data collection. Doctors of public hospitals were involved in this study which proves that all factors are reliable.", "title": "" }, { "docid": "8946acc84c07e1163aadc04cf25f4840", "text": "Leisure travelers increasingly prefer to book hotel online when considering the convenience and cost/ time saving. This research examines the direct and mediating effects of brand image, perceived price, trust, perceived value on consumers' booking intentions and compares the gender differences in online hotel booking. The outcomes confirm most of the direct and indirect path effects and are consistent with findings from previous studies. Consumers in Taiwan tend to believe the hotel price is affordable, the hotel brand is attractive, the hotel is trustworthy, the hotel will offer good value for the price and the likelihood of their booking intentions is high. Brand image, perceived price, and perceived value are the three critical determinants directly influencing purchase intentions. However, the impact of trust on purchase intentions is not significant. The differences between males and females on purchase intentions are not significant as well. Managerial implications of these results are discussed. © 2015 College of Management, National Cheng Kung University. Production and hosting by Elsevier Taiwan LLC. All rights reserved.", "title": "" }, { "docid": "822e9be6fa3440640d4b3153ed5e1678", "text": "Knowledge tracing serves as a keystone in delivering personalized education. However, few works attempted to model students’ knowledge state in the setting of Second Language Acquisition. The Duolingo Shared Task on Second Language Acquisition Modeling (Settles et al., 2018) provides students’ trace data that we extensively analyze and engineer features from for the task of predicting whether a student will correctly solve a vocabulary exercise. Our analyses of students’ learning traces reveal that factors like exercise format and engagement impact their exercise performance to a large extent. 
Overall, we extracted 23 different features as input to a Gradient Tree Boosting framework, which resulted in an AUC score of between 0.80 and 0.82 on the official test set.", "title": "" }, { "docid": "b33ad8cd7ac6a58a9aef00beb01fc621", "text": "We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to compare their contribution when combined with other advanced techniques. We show performance of several well-known types of language models, with the best results achieved with a recurrent neural network based language model. The baseline unpruned KneserNey 5-gram model achieves perplexity 67.6. A combination of techniques leads to 35% reduction in perplexity, or 10% reduction in cross-entropy (bits), over that baseline. The benchmark is available as a code.google.com project; besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the baseline n-gram models.", "title": "" }, { "docid": "ce384939966654196aabbb076326c779", "text": "We address the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions. As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative. However, many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations. Here we focus on adversarial domain adaptation, deriving important findings about when it performs well and what properties of the domains are important in this regard. Our experiments with StackExchange data show an average improvement of 5.6% over the best baseline across multiple pairs of domains.", "title": "" }, { "docid": "09085fc15308a96cd9441bb0e23e6c1a", "text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.", "title": "" }, { "docid": "dd40063dd10027f827a65976261c8683", "text": "Many software process methods and tools presuppose the existence of a formal model of a process. 
Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.", "title": "" }, { "docid": "59e4be24f48ff2e097ecd92987d51619", "text": "Learning the representations of a knowledge graph has attracted significant research interest in the field of intelligent Web. By regarding each relation as one translation from head entity to tail entity, translation-based methods including TransE, TransH and TransR are simple, effective and achieving the state-of-the-art performance. However, they still suffer the following issues: (i) low performance when modeling 1-to-N, N-to-1 and Nto-N relations. (ii) limited performance due to the structure sparseness of the knowledge graph. In this paper, we propose a novel knowledge graph representation learning method by taking advantage of the rich context information in a text corpus. The rich textual context information is incorporated to expand the semantic structure of the knowledge graph and each relation is enabled to own different representations for different head and tail entities to better handle 1-to-N, N-to-1 and N-to-N relations. Experiments on multiple benchmark datasets show that our proposed method successfully addresses the above issues and significantly outperforms the state-of-the-art methods.", "title": "" }, { "docid": "47866c8eb518f962213e3a2d8c3ab8d3", "text": "With the increasing fears of the impacts of the high penetration rates of Photovoltaic (PV) systems, a technical study about their effects on the power quality metrics of the utility grid is required. Since such study requires a complete modeling of the PV system in an electromagnetic transient software environment, PSCAD was chosen. This paper investigates a grid-tied PV system that is prepared in PSCAD. The model consists of PV array, DC link capacitor, DC-DC buck converter, three phase six-pulse inverter, AC inductive filter, transformer and a utility grid equivalent model. The paper starts with investigating the tasks of the different blocks of the grid-tied PV system model. It also investigates the effect of variable atmospheric conditions (irradiation and temperature) on the performance of the different components in the model. DC-DC converter and inverter in this model use PWM and SPWM switching techniques, respectively. Finally, total harmonic distortion (THD) analysis on the inverter output current at PCC will be applied and the obtained THD values will be compared with the limits specified by the regulating standards such as IEEE Std 519-1992.", "title": "" }, { "docid": "edea2ca381ac3115a1c2218425ff9b55", "text": "Reconfigurable hardware is by far the most dominant implementation platform in terms of the number of designs per year. 
During the past decade, security has emerged as a premier design metrics with an ever increasing scope. Our objective is to identify and survey the most important issues related to FPGA security. Instead of insisting on comprehensiveness, we focus on a number of techniques that have the highest potential for conceptual breakthroughs or for the practical widespread adoption. Our emphasis is on security primitives (PUFs and TRNGs), analysis of potential vulnerabilities of FPGA synthesis flow, digital rights management, and FPGA-based applied algorithmic cryptography. We also discuss the most popular and a selection of recent research directions related to FPGA-based security platforms. Specifically, we identify and discuss a number of classical and emerging exciting FPGA-based security research and development directions.", "title": "" }, { "docid": "ddef188a971d53c01d242bb9198eac10", "text": "State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems. This paper proposes a deep learning based approach that can utilize only the slot description in context without the need for any labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The main idea of this paper is to leverage the encoding of the slot names and descriptions within a multi-task deep learned slot filling model, to implicitly align slots across domains. The proposed approach is promising for solving the domain scaling problem and eliminating the need for any manually annotated data or explicit schema alignment. Furthermore, our experiments on multiple domains show that this approach results in significantly better slot-filling performance when compared to using only in-domain data, especially in the low data regime.", "title": "" }, { "docid": "58f1ba92eb199f4d105bf262b30dbbc5", "text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. 
To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.", "title": "" }, { "docid": "42fd940e239ed3748b007fde8b583b25", "text": "The ImageCLEF’s plant identification task provides a testbed for the system-oriented evaluation of plant identification, more precisely on the 126 tree species identification based on leaf images. Three types of image content are considered: Scan, Scan-like (leaf photographs with a white uniform background), and Photograph (unconstrained leaf with natural background). The main originality of this data is that it was specifically built through a citizen sciences initiative conducted by Tela Botanica, a French social network of amateur and expert botanists. This makes the task closer to the conditions of a real-world application. This overview presents more precisely the resources and assessments of task, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results. With a total of eleven groups from eight countries and with a total of 30 runs submitted, involving distinct and original methods, this second year pilot task confirms Image Retrieval community interest for biodiversity and botany, and highlights further challenging studies in plant identification.", "title": "" }, { "docid": "b395aa3ae750ddfd508877c30bae3a38", "text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.", "title": "" }, { "docid": "bd1c31a42c766b3303317ab41965c31f", "text": "Machine reading comprehension (MRC) has recently attracted attention in the fields of natural language processing and machine learning. One of the problematic presumptions with current MRC technologies is that each question is assumed to be answerable by looking at a given text passage. However, to realize human-like language comprehension ability, a machine should also be able to distinguish not-answerable questions (NAQs) from answerable questions. To develop this functionality, a dataset incorporating hard-to-detect NAQs is vital; however, its manual construction would be expensive. This paper proposes a dataset creation method that alters an existing MRC dataset, the Stanford Question Answering Dataset, and describes the resulting dataset. 
The value of this dataset is likely to increase if each NAQ in the dataset is properly classified with the difficulty of identifying it as an NAQ. This difficulty level would allow researchers to evaluate a machine’s NAQ detection performance more precisely. Therefore, we propose a method for automatically assigning difficulty level labels, which basically measures the similarity between a question and the target text passage. Our NAQ detection experiments demonstrate that the resulting dataset, having difficulty level annotations, is valid and potentially useful in the development of advanced MRC models.", "title": "" }, { "docid": "700bbf775539aecb956563401857f09a", "text": "Un-manned aerial vehicle (UAVs) have the potential to change the landscape of wide-area wireless connectivity by bringing them to areas where connectivity was sparing or non-existent (e.g. rural areas) or has been compromised due to disasters. While Google’s Project Loon and Facebook’s Project Aquila are examples of high-altitude, long-endurance UAV-based connectivity efforts in this direction, the telecom operators (e.g. AT&T and Verizon) have been exploring low-altitude UAV-based LTE solutions for on-demand deployments. Understandably, these projects are in their early stages and face formidable challenges in their realization and deployment. The goal of this document is to expose the reader to both the challenges as well as the potential offered by these unconventional connectivity solutions. We aim to explore the endto-end design of such UAV-based connectivity networks particularly in the context of low-altitude UAV networks providing LTE connectivity. Specifically, we aim to highlight the challenges that span across multiple layers (access, core network, backhaul) in an inter-twined manner as well as the richness and complexity of the design space itself. To help interested readers navigate this complex design space towards a solution, we also articulate the overview of one such end-to-end design, namely SkyLiTE– a self-organizing network of low-altitude UAVs that provide optimized LTE connectivity in a desired region.", "title": "" }, { "docid": "39168bcf3cd49c13c86b13e89197ce7d", "text": "An unprecedented booming has been witnessed in the research area of artistic style transfer ever since Gatys et al. introduced the neural method. One of the remaining challenges is to balance a trade-off among three critical aspects—speed, flexibility, and quality: (i) the vanilla optimization-based algorithm produces impressive results for arbitrary styles, but is unsatisfyingly slow due to its iterative nature, (ii) the fast approximation methods based on feed-forward neural networks generate satisfactory artistic effects but bound to only a limited number of styles, and (iii) feature-matching methods like AdaIN achieve arbitrary style transfer in a real-time manner but at a cost of the compromised quality. We find it considerably difficult to balance the trade-off well merely using a single feed-forward step and ask, instead, whether there exists an algorithm that could adapt quickly to any style, while the adapted model maintains high efficiency and good image quality. Motivated by this idea, we propose a novel method, coined MetaStyle, which formulates the neural style transfer as a bilevel optimization problem and combines learning with only a few post-processing update steps to adapt to a fast approximation model with satisfying artistic effects, comparable to the optimization-based methods for an arbitrary style. 
The qualitative and quantitative analysis in the experiments demonstrates that the proposed approach achieves high-quality arbitrary artistic style transfer effectively, with a good trade-off among speed, flexibility, and quality.", "title": "" }, { "docid": "c9aa8454246e983e9aa2752bfa667f43", "text": "BACKGROUND\nADHD is diagnosed and treated more often in males than in females. Research on gender differences suggests that girls may be consistently underidentified and underdiagnosed because of differences in the expression of the disorder among boys and girls. One aim of the present study was to assess in a clinical sample of medication naïve boys and girls with ADHD, whether there were significant gender x diagnosis interactions in co-existing symptom severity and executive function (EF) impairment. The second aim was to delineate specific symptom ratings and measures of EF that were most important in distinguishing ADHD from healthy controls (HC) of the same gender.\n\n\nMETHODS\nThirty-seven females with ADHD, 43 males with ADHD, 18 HC females and 32 HC males between 8 and 17 years were included. Co-existing symptoms were assessed with self-report scales and parent ratings. EF was assessed with parent ratings of executive skills in everyday situations (BRIEF), and neuropsychological tests. The three measurement domains (co-existing symptoms, BRIEF, neuropsychological EF tests) were investigated using analysis of variance (ANOVA) and random forest classification.\n\n\nRESULTS\nANOVAs revealed only one significant diagnosis x gender interaction, with higher rates of self-reported anxiety symptoms in females with ADHD. Random forest classification indicated that co-existing symptom ratings was substantially better in distinguishing subjects with ADHD from HC in females (93% accuracy) than in males (86% accuracy). The most important distinguishing variable was self-reported anxiety in females, and parent ratings of rule breaking in males. Parent ratings of EF skills were better in distinguishing subjects with ADHD from HC in males (96% accuracy) than in females (92% accuracy). Neuropsychological EF tests had only a modest ability to categorize subjects as ADHD or HC in males (73% accuracy) and females (79% accuracy).\n\n\nCONCLUSIONS\nOur findings emphasize the combination of self-report and parent rating scales for the identification of different comorbid symptom expression in boys and girls already diagnosed with ADHD. Self-report scales may increase awareness of internalizing problems particularly salient in females with ADHD.", "title": "" }, { "docid": "11357967d7e83c45bb1a6ba3edfebac2", "text": "We report a unique MEMS magnetometer based on a disk shaped radial contour mode thin-film piezoelectric on silicon (TPoS) CMOS-compatible resonator. This is the first device of its kind that targets operation under atmospheric pressure conditions as opposed that existing Lorentz force MEMS magnetometers that depend on vacuum. We exploit the chosen vibration mode to enhance coupling to deliver a field sensitivity of 10.92 mV/T while operating at a resonant frequency of 6.27 MHz, despite of a sub-optimal mechanical quality (Q) factor of 697 under ambient conditions in air.", "title": "" } ]
scidocsrr
10a2ef7db2c68903bc4fbd07b4a600de
Online Affect Detection and Robot Behavior Adaptation for Intervention of Children With Autism
[ { "docid": "0e8e72e35393fca6f334ae2909a4cc74", "text": "High-functioning children with autism were compared with two control groups on measures of anxiety and social worries. Comparison control groups consisted of children with specific language impairment (SLI) and normally developing children. Each group consisted of 15 children between the ages of 8 and 12 years and were matched for age and gender. Children with autism were found to be most anxious on both measures. High anxiety subscale scores for the autism group were separation anxiety and obsessive-compulsive disorder. These findings are discussed within the context of theories of autism and anxiety in the general population of children. Suggestions for future research are made.", "title": "" }, { "docid": "f1ef345686548b060b70ebc972d51b47", "text": "Given the importance of implicit communication in human interactions, it would be valuable to have this capability in robotic systems wherein a robot can detect the motivations and emotions of the person it is working with. Recognizing affective states from physiological cues is an effective way of implementing implicit human–robot interaction. Several machine learning techniques have been successfully employed in affect-recognition to predict the affective state of an individual given a set of physiological features. However, a systematic comparison of the strengths and weaknesses of these methods has not yet been done. In this paper, we present a comparative study of four machine learning methods—K-Nearest Neighbor, Regression Tree (RT), Bayesian Network and Support Vector Machine (SVM) as applied to the domain of affect recognition using physiological signals. The results showed that SVM gave the best classification accuracy even though all the methods performed competitively. RT gave the next best classification accuracy and was the most space and time efficient.", "title": "" } ]
[ { "docid": "44b14f681f175027b22150c115d64c44", "text": "Video segmentation has become an important and active research area with a large diversity of proposed approaches. Graph-based methods, enabling top-performance on recent benchmarks, consist of three essential components: 1. powerful features account for object appearance and motion similarities; 2. spatio-temporal neighborhoods of pixels or superpixels (the graph edges) are modeled using a combination of those features; 3. video segmentation is formulated as a graph partitioning problem. While a wide variety of features have been explored and various graph partition algorithms have been proposed, there is surprisingly little research on how to construct a graph to obtain the best video segmentation performance. This is the focus of our paper. We propose to combine features by means of a classifier, use calibrated classifier outputs as edge weights and define the graph topology by edge selection. By learning the graph (without changes to the graph partitioning method), we improve the results of the best performing video segmentation algorithm by 6% on the challenging VSB100 benchmark, while reducing its runtime by 55%, as the learnt graph is much sparser.", "title": "" }, { "docid": "96423c77c714172e04d375b7ee1e9869", "text": "This paper presents a body-fixed-sensor-based approach to assess potential sleep apnea patients. A trial involving 15 patients at a sleep unit was undertaken. Vibration sounds were acquired from an accelerometer sensor fixed with a noninvasive mounting on the suprasternal notch of subjects resting in supine position. Respiratory, cardiac, and snoring components were extracted by means of digital signal processing techniques. Mainly, the following biomedical parameters used in new sleep apnea diagnosis strategies were calculated: heart rate, heart rate variability, sympathetic and parasympathetic activity, respiratory rate, snoring rate, pitch associated with snores, and airflow indirect quantification. These parameters were compared to those obtained by means of polysomnography and an accurate microphone. Results demonstrated the feasibility of implementing an accelerometry-based portable device as a simple and cost-effective solution for contributing to the screening of sleep apnea-hypopnea syndrome and other breathing disorders.", "title": "" }, { "docid": "2827e0d197b7f66c7f6ceb846c6aaa27", "text": "The food industry is becoming more customer-oriented and needs faster response times to deal with food scandals and incidents. Good traceability systems help to minimize the production and distribution of unsafe or poor quality products, thereby minimizing the potential for bad publicity, liability, and recalls. The current food labelling system cannot guarantee that the food is authentic, good quality and safe. Therefore, traceability is applied as a tool to assist in the assurance of food safety and quality as well as to achieve consumer confidence. This paper presents comprehensive information about traceability with regards to safety and quality in the food supply chain. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "662ec285031306816814378e6e192782", "text": "One task of heterogeneous face recognition is to match a near infrared (NIR) face image to a visible light (VIS) image. In practice, there are often a few pairwise NIR-VIS face images but it is easy to collect lots of VIS face images. 
Therefore, how to use these unpaired VIS images to improve the NIR-VIS recognition accuracy is an ongoing issue. This paper presents a deep TransfeR NIR-VIS heterogeneous facE recognition neTwork (TRIVET) for NIR-VIS face recognition. First, to utilize large numbers of unpaired VIS face images, we employ the deep convolutional neural network (CNN) with ordinal measures to learn discriminative models. The ordinal activation function (Max-Feature-Map) is used to select discriminative features and make the models robust and lighten. Second, we transfer these models to NIR-VIS domain by fine-tuning with two types of NIR-VIS triplet loss. The triplet loss not only reduces intra-class NIR-VIS variations but also augments the number of positive training sample pairs. It makes fine-tuning deep models on a small dataset possible. The proposed method achieves state-of-the-art recognition performance on the most challenging CASIA NIR-VIS 2.0 Face Database. It achieves a new record on rank-1 accuracy of 95.74% and verification rate of 91.03% at FAR=0.001. It cuts the error rate in comparison with the best accuracy [27] by 69%.", "title": "" }, { "docid": "74290ff01b32423087ce0025625dc445", "text": "niques is now the world champion computer program in the game of Contract Bridge. As reported in The New York Times and The Washington Post, this program—a new version of Great Game Products’ BRIDGE BARON program—won the Baron Barclay World Bridge Computer Challenge, an international competition hosted in July 1997 by the American Contract Bridge League. It is well known that the game tree search techniques used in computer programs for games such as Chess and Checkers work differently from how humans think about such games. In contrast, our new version of the BRIDGE BARON emulates the way in which a human might plan declarer play in Bridge by using an adaptation of hierarchical task network planning. This article gives an overview of the planning techniques that we have incorporated into the BRIDGE BARON and discusses what the program’s victory signifies for research on AI planning and game playing.", "title": "" }, { "docid": "e7c97ff0a949f70b79fb7d6dea057126", "text": "Most conventional document categorization methods require a large number of documents with labeled categories for training. These methods are hard to be applied in scenarios, such as scientific publications, where training data is expensive to obtain and categories could change over years and across domains. In this work, we propose UNEC, an unsupervised representation learning model that directly categories documents without the need of labeled training data. Specifically, we develop a novel cascade embedding approach. We first embed concepts, i.e., significant phrases mined from scientific publications, into continuous vectors, which capture concept semantics. Based on the concept similarity graph built from the concept embedding, we further embed concepts into a hidden category space, where the category information of concepts becomes explicit. Finally we categorize documents by jointly considering the category attribution of their concepts. Our experimental results show that UNEC significantly outperforms several strong baselines on a number of real scientific corpora, under both automatic and manual evaluation.", "title": "" }, { "docid": "51165fba0bc57e99069caca5796398c7", "text": "Reinforcement learning has achieved several successes in sequential decision problems. 
However, these methods require a large number of training iterations in complex environments. A standard paradigm to tackle this challenge is to extend reinforcement learning to handle function approximation with deep learning. Lack of interpretability and impossibility to introduce background knowledge limits their usability in many safety-critical real-world scenarios. In this paper, we study how to combine reinforcement learning and external knowledge. We derive a rule-based variant version of the Sarsa(λ) algorithm, which we call Sarsarb(λ), that augments data with complex knowledge and exploits similarities among states. We apply our method to a trading task from the Stock Market Environment. We show that the resulting algorithm leads to much better performance but also improves training speed compared to the Deep Qlearning (DQN) algorithm and the Deep Deterministic Policy Gradients (DDPG) algorithm.", "title": "" }, { "docid": "17ba29c670e744d6e4f9e93ceb109410", "text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.", "title": "" }, { "docid": "26cc29177040461634929eb1fa13395d", "text": "In this paper, we first characterize distributed real-time systems by the following two properties that have to be supported: best eflorl and leas2 suffering. Then, we propose a distributed real-time object model DRO which complies these properties. Based on the DRO model, we design an object oriented programming language DROL: an extension of C++ with the capa.bility of describing distributed real-time systems. The most eminent feature of DROL is that users can describe on meta level the semantics of message communications as a communication protocol with sending and receiving primitives. With this feature, we can construct a flexible distributed real-time system satisfying specifications which include timing constraints. We implement a runtime system of DROL on the ARTS kernel, and evaluate the efficiency of the prototype implementation as well as confirm the high expressive power of the language.", "title": "" }, { "docid": "13177a7395eed80a77571bd02a962bc9", "text": "Orexin-A and orexin-B are neuropeptides originally identified as endogenous ligands for two orphan G-protein-coupled receptors. Orexin neuropeptides (also known as hypocretins) are produced by a small group of neurons in the lateral hypothalamic and perifornical areas, a region classically implicated in the control of mammalian feeding behavior. Orexin neurons project throughout the central nervous system (CNS) to nuclei known to be important in the control of feeding, sleep-wakefulness, neuroendocrine homeostasis, and autonomic regulation. 
orexin mRNA expression is upregulated by fasting and insulin-induced hypoglycemia. C-fos expression in orexin neurons, an indicator of neuronal activation, is positively correlated with wakefulness and negatively correlated with rapid eye movement (REM) and non-REM sleep states. Intracerebroventricular administration of orexins has been shown to significantly increase food consumption, wakefulness, and locomotor activity in rodent models. Conversely, an orexin receptor antagonist inhibits food consumption. Targeted disruption of the orexin gene in mice produces a syndrome remarkably similar to human and canine narcolepsy, a sleep disorder characterized by excessive daytime sleepiness, cataplexy, and other pathological manifestations of the intrusion of REM sleep-related features into wakefulness. Furthermore, orexin knockout mice are hypophagic compared with weight and age-matched littermates, suggesting a role in modulating energy metabolism. These findings suggest that the orexin neuropeptide system plays a significant role in feeding and sleep-wakefulness regulation, possibly by coordinating the complex behavioral and physiologic responses of these complementary homeostatic functions.", "title": "" }, { "docid": "0cd42818f21ada2a8a6c2ed7a0f078fe", "text": "In perceiving objects we may synthesize conjunctions of separable features by directing attention serially to each item in turn (A. Treisman and G. Gelade, Cognitive Psychology, 1980, 12, 97136). This feature-integration theory predicts that when attention is diverted or overloaded, features may be wrongly recombined, giving rise to “illusory conjunctions.” The present paper confirms that illusory conjunctions are frequently experienced among unattended stimuli varying in color and shape, and that they occur also with size and solidity (outlined versus filled-in shapes). They are shown both in verbal recall and in simultaneous and successive matching tasks, making it unlikely that they depend on verbal labeling or on memory failure. They occur as often between stimuli differing on many features as between more similar stimuli, and spatial separation has little effect on their frequency. Each feature seems to be coded as an independent entity and to migrate, when attention is diverted, with few constraints from the other features of its source or destination.", "title": "" }, { "docid": "853ef57bfa4af5edf4ee3c8a46e4b4f4", "text": "Hidden properties of social media users, such as their ethnicity, gender, and location, are often reflected in their observed attributes, such as their first and last names. Furthermore, users who communicate with each other often have similar hidden properties. We propose an algorithm that exploits these insights to cluster the observed attributes of hundreds of millions of Twitter users. Attributes such as user names are grouped together if users with those names communicate with other similar users. We separately cluster millions of unique first names, last names, and userprovided locations. The efficacy of these clusters is then evaluated on a diverse set of classification tasks that predict hidden users properties such as ethnicity, geographic location, gender, language, and race, using only profile names and locations when appropriate. 
Our readily-replicable approach and publiclyreleased clusters are shown to be remarkably effective and versatile, substantially outperforming state-of-the-art approaches and human accuracy on each of the tasks studied.", "title": "" }, { "docid": "77f60100af0c9556e5345ee1b04d8171", "text": "SDNET2018 is an annotated image dataset for training, validation, and benchmarking of artificial intelligence based crack detection algorithms for concrete. SDNET2018 contains over 56,000 images of cracked and non-cracked concrete bridge decks, walls, and pavements. The dataset includes cracks as narrow as 0.06 mm and as wide as 25 mm. The dataset also includes images with a variety of obstructions, including shadows, surface roughness, scaling, edges, holes, and background debris. SDNET2018 will be useful for the continued development of concrete crack detection algorithms based on deep convolutional neural networks (DCNNs), which are a subject of continued research in the field of structural health monitoring. The authors present benchmark results for crack detection using SDNET2018 and a crack detection algorithm based on the AlexNet DCNN architecture. SDNET2018 is freely available at https://doi.org/10.15142/T3TD19.", "title": "" }, { "docid": "e8f431676ed0a85cb09a6462303a3ec7", "text": "This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potential noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese – English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It’s freely available to the public.", "title": "" }, { "docid": "e3b473dbff892af0175a73275c770f7d", "text": "Spacecraft require all manner of both digital and analog circuits. Onboard digital systems are constructed almost exclusively from field-programmable gate array (FPGA) circuits providing numerous advantages over discrete design including high integration density, high reliability, fast turn-around design cycle time, lower mass, volume, and power consumption, and lower parts acquisition and flight qualification costs. Analog and mixed-signal circuits perform tasks ranging from housekeeping to signal conditioning and processing. These circuits are painstakingly designed and built using discrete components due to a lack of options for field-programmability. FPAA (Field-Programmable Analog Array) and FPMA (Field-Programmable Mixed-signal Array) parts exist [1] but not in radiation-tolerant technology and not necessarily in an architecture optimal for the design of analog circuits for spaceflight applications. This paper outlines an architecture proposed for an FPAA fabricated in an existing commercial digital CMOS process used to make radiation-tolerant antifuse-based FPGA devices. The primary concerns are the impact of the technology and the overall array architecture on the flexibility of programming, the bandwidth available for high-speed analog circuits, and the accuracy of the components for highperformance applications.", "title": "" }, { "docid": "774df4733d98b781f32222cf843ec381", "text": "This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. 
Our work makes the following assumption: there exists a nonlinear transformation between the joint feature/label space distributions of the two domain Ps and Pt that can be estimated with optimal transport. We propose a solution of this problem that allows to recover an estimated target P t = (X, f(X)) by optimizing simultaneously the optimal coupling and f . We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypothesis or loss functions is demonstrated with real world classification and regression problems, for which we reach or surpass state-of-the-art results.", "title": "" }, { "docid": "b7a08eaeb69fa6206cb9aec9cc54f2c3", "text": "This paper describes a computational pragmatic model which is geared towards providing helpful answers to modal and hypothetical questions. The work brings together elements from fonna l . semantic theories on modality m~d question answering, defines a wkler, pragmatically flavoured, notion of answerhood based on non-monotonic inference aod develops a notion of context, within which aspects of more cognitively oriented theories, such as Relevance Theory, can be accommodated. The model has been inlplemented. The research was fundexl by ESRC grant number R000231279.", "title": "" }, { "docid": "ca905aef2477905783f7d18be841f99b", "text": "PURPOSE\nHumans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit.\n\n\nMETHODS\nIn experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field.\n\n\nRESULTS\nPursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. CONCLUSIONS. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.", "title": "" }, { "docid": "3907bddf6a56b96c4e474d46ddd04359", "text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. 
All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.", "title": "" }, { "docid": "024cc15c164656f90ade55bf3c391405", "text": "Unmanned aerial vehicles (UAVs), also known as drones have many applications and they are a current trend across many industries. They can be used for delivery, sports, surveillance, professional photography, cinematography, military combat, natural disaster assistance, security, and the list grows every day. Programming opens an avenue to automate many processes of daily life and with the drone as aerial programmable eyes, security and surveillance can become more efficient and cost effective. At Barry University, parking is becoming an issue as the number of people visiting the school greatly outnumbers the convenient parking locations. This has caused a multitude of hazards in parking lots due to people illegally parking, as well as unregistered vehicles parking in reserved areas. In this paper, we explain how automated drone surveillance is utilized to detect unauthorized parking at Barry University. The automated process is incorporated into Java application and completed in three steps: collecting visual data, processing data automatically, and sending automated responses and queues to the operator of the system.", "title": "" } ]
scidocsrr
04c4db8f036aa52d79c114f8157b8631
Gan-Based Domain Adaptation for Object Classification
[ { "docid": "100c152685655ad6865f740639dd7d57", "text": "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.", "title": "" }, { "docid": "bc4ce5871c530bad6f87708328e08531", "text": "Detecting vehicles in aerial images provides important information for traffic management and urban planning. Detecting the cars in the images is challenging due to the relatively small size of the target objects and the complex background in man-made areas. It is particularly challenging if the goal is near-real-time detection, i.e., within few seconds, on large images without any additional information, e.g., road database and accurate target size. We present a method that can detect the vehicles on a 21-MPixel original frame image without accurate scale information within seconds on a laptop single threaded. In addition to the bounding box of the vehicles, we extract also orientation and type (car/truck) information. First, we apply a fast binary detector using integral channel features in a soft-cascade structure. In the next step, we apply a multiclass classifier on the output of the binary detector, which gives the orientation and type of the vehicles. We evaluate our method on a challenging data set of original aerial images over Munich and a data set captured from an unmanned aerial vehicle (UAV).", "title": "" } ]
[ { "docid": "1561ef2d0c846e8faa765aae2a7ad922", "text": "We propose a novel monocular visual inertial odometry algorithm that combines the advantages of EKF-based approaches with those of direct photometric error minimization methods. The method is based on sparse, very small patches and incorporates the minimization of photometric error directly into the EKF measurement model so that inertial data and vision-based surface measurements are used simultaneously during camera pose estimation. We fuse vision-based and inertial measurements almost at the raw-sensor level, allowing the estimated system state to constrain and guide image-space measurements. Our formulation allows for an efficient implementation that runs in real-time on a standard CPU and has several appealing and unique characteristics such as being robust to fast camera motion, in particular rotation, and not depending on the presence of corner-like features in the scene. We experimentally demonstrate robust and accurate performance compared to ground truth and show that our method works on scenes containing only non-intersecting lines.", "title": "" }, { "docid": "364f9c36bef260cc938d04ff3b4f4c67", "text": "We propose a scalable, efficient and accurate approach to retrieve 3D models for objects in the wild. Our contribution is twofold. We first present a 3D pose estimation approach for object categories which significantly outperforms the state-of-the-art on Pascal3D+. Second, we use the estimated pose as a prior to retrieve 3D models which accurately represent the geometry of objects in RGB images. For this purpose, we render depth images from 3D models under our predicted pose and match learned image descriptors of RGB images against those of rendered depth images using a CNN-based multi-view metric learning approach. In this way, we are the first to report quantitative results for 3D model retrieval on Pascal3D+, where our method chooses the same models as human annotators for 50% of the validation images on average. In addition, we show that our method, which was trained purely on Pascal3D+, retrieves rich and accurate 3D models from ShapeNet given RGB images of objects in the wild.", "title": "" }, { "docid": "e2d25382acd23c9431ccd3905d8bf13a", "text": "Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. 
We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.", "title": "" }, { "docid": "88b89521775ba2d8570944a54e516d0f", "text": "The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the “physiological envelope” during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.", "title": "" }, { "docid": "40746bfccf801222d99151dc4b4cb7e8", "text": "Fingerprints are the oldest and most widely used form of biometric identification. Everyone is known to have unique, immutable fingerprints. As most Automatic Fingerprint Recognition Systems are based on local ridge features known as minutiae, marking minutiae accurately and rejecting false ones is very important. However, fingerprint images get degraded and corrupted due to variations in skin and impression conditions. Thus, image enhancement techniques are employed prior to minutiae extraction. A critical step in automatic fingerprint matching is to reliably extract minutiae from the input fingerprint images. This paper presents a review of a large number of techniques present in the literature for extracting fingerprint minutiae. The techniques are broadly classified as those working on binarized images and those that work on gray scale images directly.", "title": "" }, { "docid": "5f7adc28fab008d93a968b6a1e5ad061", "text": "This paper describes recent approaches using text-mining to automatically profile and extract arguments from legal cases. We outline some of the background context and motivations. We then turn to consider issues related to the construction and composition of a corpora of legal cases. We show how a Context-Free Grammar can be used to extract arguments, and how ontologies and Natural Language Processing can identify complex information such as case factors and participant roles. Together the results bring us closer to automatic identification of legal arguments.", "title": "" }, { "docid": "d5a2fa9be5bbce163de803a7583503f8", "text": "We compared the possibility of detecting hidden objects covered with various types of clothing by using passive imagers operating in a terahertz (THz) range at 1.2 mm (250 GHz) and a mid-wavelength infrared at 3-6 μm (50-100 THz). We investigated theoretical limitations, performance of imagers, and physical properties of fabrics in both the regions. 
In order to investigate the time stability of detection, we performed measurements in sessions each lasting 30 min. We present a theoretical comparison of two spectra, as well as the results of experiments. In order to compare the capabilities of passive imaging of hidden objects, we combined the properties of textiles, performance of imagers, and properties of radiation in both spectral ranges. The paper presents the comparison of the original results of measurement sessions for the two spectrums with analysis.", "title": "" }, { "docid": "42eca5d49ef3e27c76b65f8feccd8499", "text": "Convolutional Neural Networks (CNNs) have shown to yield very strong results in several Computer Vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for Sentiment Analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that needs models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but is below state of the art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, like in Computer Vision, and that the model can profitably use information from as far as 16 words before the target.", "title": "" }, { "docid": "ca9c4512d2258a44590a298879219970", "text": "I propose a common framework that combines three different paradigms in machine learning: generative, discriminative and imitative learning. A generative probabilistic distribution is a principled way to model many machine learning and machine perception problems. Therein, one provides domain specific knowledge in terms of structure and parameter priors over the joint space of variables. Bayesian networks and Bayesian statistics provide a rich and flexible language for specifying this knowledge and subsequently refining it with data and observations. The final result is a distribution that is a good generator of novel exemplars. Conversely, discriminative algorithms adjust a possibly non-distributional model to data optimizing for a specific task, such as classification or prediction. This typically leads to superior performance yet compromises the flexibility of generative modeling. I present Maximum Entropy Discrimination (MED) as a framework to combine both discriminative estimation and generative probability densities. Calculations involve distributions over parameters, margins, and priors and are provably and uniquely solvable for the exponential family. Extensions include regression, feature selection, and transduction. SVMs are also naturally subsumed and can be augmented with, for example, feature selection, to obtain substantial improvements. To extend to mixtures of exponential families, I derive a discriminative variant of the ExpectationMaximization (EM) algorithm for latent discriminative learning (or latent MED). While EM and Jensen lower bound log-likelihood, a dual upper bound is made possible via a novel reverse-Jensen inequality. 
The variational upper bound on latent log-likelihood has the same form as EM bounds, is computable efficiently and is globally guaranteed. It permits powerful discriminative learning with the wide range of contemporary probabilistic mixture models (mixtures of Gaussians, mixtures of multinomials and hidden Markov models). We provide empirical results on standardized data sets that demonstrate the viability of the hybrid discriminative-generative approaches of MED and reverse-Jensen bounds over state of the art discriminative techniques or generative approaches. Subsequently, imitative learning is presented as another variation on generative modeling which also learns from exemplars from an observed data source. However, the distinction is that the generative model is an agent that is interacting in a much more complex surrounding external world. It is not efficient to model the aggregate space in a generative setting. I demonstrate that imitative learning (under appropriate conditions) can be adequately addressed as a discriminative prediction task which outperforms the usual generative approach. This discriminative-imitative learning approach is applied with a generative perceptual system to synthesize a real-time agent that learns to engage in social interactive behavior. Thesis Supervisor: Alex Pentland Title: Toshiba Professor of Media Arts and Sciences, MIT Media Lab Discriminative, Generative and Imitative Learning", "title": "" }, { "docid": "ffcfd70d0764bd8a711b1afd9e5fcc29", "text": "Life satisfaction refers to a somewhat stable cognitive assessment of one's own life. Life satisfaction is an important component of subjective well being, the scientific term for happiness. The other component is affect: the balance between the presence of positive and negative emotions in daily life. While affect has been studied using social media datasets (particularly from Twitter), life satisfaction has received little to no attention. Here, we examine trends in posts about life satisfaction from a two-year sample of Twitter data. We apply a surveillance methodology to extract expressions of both satisfaction and dissatisfaction with life. A noteworthy result is that consistent with their definitions trends in life satisfaction posts are immune to external events (political, seasonal etc.) unlike affect trends reported by previous researchers. Comparing users we find differences between satisfied and dissatisfied users in several linguistic, psychosocial and other features. For example the latter post more tweets expressing anger, anxiety, depression, sadness and on death. We also study users who change their status over time from satisfied with life to dissatisfied or vice versa. Noteworthy is that the psychosocial tweet features of users who change from satisfied to dissatisfied are quite different from those who stay satisfied over time. Overall, the observations we make are consistent with intuition and consistent with observations in the social science research. This research contributes to the study of the subjective well being of individuals through social media.", "title": "" }, { "docid": "897fdff0ef02518435cc25b7ee9fce65", "text": "Fire outage is one of the phenomena that still pose a serious challenge to the security of lives and properties. Fire, being an important process that affects ecological systems across the globe has both positive and negative effects. It has been used by humans for cooking, generating heat, signalling and propulsion purposes. 
However, soil erosion, atmospheric pollution and hazards to life and property are the major negative effects. Although the use of wildland fire, controlled burns, and the provision of fire-fighting services are put in place to prevent outbreaks, these measures are evident mostly in developed countries. Fire accidents create serious health and safety hazards in developing countries, where they have led to catastrophic situations: unnecessary injury or complete loss of life on the one hand, and partial or complete damage to expensive and valuable property on the other. Because these losses are enormous, this paper proposes the development of a GSM-based fire detector system: a cost-effective system that detects fire or smoke and sends alert information to a mobile phone for quick and immediate action, thereby avoiding unnecessary and costly industrial and domestic breakdown.", "title": "" }, { "docid": "70e8615bd0e26139de31d052604c1c08", "text": "Users of the Twitter microblogging platform share a vast amount of information about various topics through short messages on a daily basis. Some of these so-called tweets include information that is relevant for software companies and could, for example, help requirements engineers to identify user needs. Therefore, tweets have the potential to aid in the continuous evolution of software applications. Despite the existence of such relevant tweets, little is known about their number and content. In this paper we report on the results of an exploratory study in which we analyzed the usage characteristics, content and automatic classification potential of tweets about software applications by using descriptive statistics, content analysis and machine learning techniques. Although the manual search of relevant information within the vast stream of tweets can be compared to looking for a needle in a haystack, our analysis shows that tweets provide a valuable input for software companies. Furthermore, our results demonstrate that machine learning techniques have the capacity to identify and harvest relevant information automatically.", "title": "" }, { "docid": "43beba8ec2a324546bce095e9c1d9f0c", "text": "Scenario-based specifications such as Message Sequence Charts (MSCs) are useful as part of a requirements specification. A scenario is a partial story, describing how system components, the environment, and users work concurrently and interact in order to provide system level functionality. Scenarios need to be combined to provide a more complete description of system behavior. Consequently, scenario synthesis is central to the effective use of scenario descriptions. How should a set of scenarios be interpreted? How do they relate to one another? What is the underlying semantics? What assumptions are made when synthesizing behavior models from multiple scenarios? In this paper, we present an approach to scenario synthesis based on a clear sound semantics, which can support and integrate many of the existing approaches to scenario synthesis. The contributions of the paper are threefold. We first define an MSC language with sound abstract semantics in terms of labeled transition systems and parallel composition. The language integrates existing approaches based on scenario composition by using high-level MSCs (hMSCs) and those based on state identification by introducing explicit component state labeling.
This combination allows stakeholders to break up scenario specifications into manageable parts and reuse scenarios using hMCSs; it also allows them to introduce additional domainspecific information and general assumptions explicitly into the scenario specification using state labels. Second, we provide a sound synthesis algorithm which translates scenarios into a behavioral specification in the form of Finite Sequential Processes. This specification can be analyzed with the Labeled Transition System Analyzer using model checking and animation. Finally, we demonstrate how many of the assumptions embedded in existing synthesis approaches can be made explicit and modeled in our approach. Thus, we provide the basis for a common approach to scenario-based specification, synthesis, and analysis.", "title": "" }, { "docid": "932dc0c02047cd701e41530c42d830bc", "text": "The concept of \"extra-cortical organization of higher mental functions\" proposed by Lev Vygotsky and expanded by Alexander Luria extends cultural-historical psychology regarding the interplay of natural and cultural factors in the development of the human mind. Using the example of self-regulation, the authors explore the evolution of this idea from its origins to recent findings on the neuropsychological trajectories of the development of executive functions. Empirical data derived from the Tools of the Mind project are used to discuss the idea of using classroom intervention to study the development of self-regulation in early childhood.", "title": "" }, { "docid": "6f609fef5fd93e776fd7d43ed91fd4a8", "text": "Wandering is among the most frequent, problematic, and dangerous behaviors for elders with dementia. Frequent wanderers likely suffer falls and fractures, which affect the safety and quality of their lives. In order to monitor outdoor wandering of elderly people with dementia, this paper proposes a real-time method for wandering detection based on individuals' GPS traces. By representing wandering traces as loops, the problem of wandering detection is transformed into detecting loops in elders' mobility trajectories. Specifically, the raw GPS data is first preprocessed to remove noisy and crowded points by performing an online mean shift clustering. A novel method called θ_WD is then presented that is able to detect loop-like traces on the fly. The experimental results on the GPS datasets of several elders have show that the θ_WD method is effective and efficient in detecting wandering behaviors, in terms of detection performance (AUC > 0.99, and 90% detection rate with less than 5 % of the false alarm rate), as well as time complexity.", "title": "" }, { "docid": "3e27d92261164961dd9ba40f483bdcf8", "text": "Several studies estimate the prevalence of gender dysphoria among adults by examining the number of individuals turning to health services. Since individuals might be hesitant to seek medical care related to gender dysphoria, these studies could underestimate the prevalence. The studies also lack information regarding the variance among different aspects of gender dysphoric conditions. Therefore, the current study estimated the prevalence by examining self-reported gender identity and dysphoria in a Dutch population sample (N = 8,064, aged 15-70 years old). Three measures assessed aspects of gender dysphoria: gender identity, dislike of the natal female/male body, and wish to obtain hormones/sex reassignment surgery. 
Results showed that 4.6 % of the natal men and 3.2 % of the natal women reported an ambivalent gender identity (equal identification with other sex as with sex assigned at birth) and 1.1 % of the natal men and 0.8 % of the natal women reported an incongruent gender identity (stronger identification with other sex as with sex assigned at birth). Lower percentages reported a dislike of their natal body and/or a wish for hormones/surgery. Combining these figures estimated the percentage of men reporting an ambivalent or incongruent gender identity combined with a dislike of their male body and a wish to obtain hormones/surgery at 0.6 %. For women, this was 0.2 %. These novel findings show that studies based on the number of individuals seeking medical care might underestimate the prevalence of gender dysphoria. Furthermore, the findings argue against a dichotomous approach to gender dysphoria.", "title": "" }, { "docid": "af3af0a4102ea0fb555cad52e4cafa50", "text": "The identification of the exact positions of the first and second heart sounds within a phonocardiogram (PCG), or heart sound segmentation, is an essential step in the automatic analysis of heart sound recordings, allowing for the classification of pathological events. While threshold-based segmentation methods have shown modest success, probabilistic models, such as hidden Markov models, have recently been shown to surpass the capabilities of previous methods. Segmentation performance is further improved when apriori information about the expected duration of the states is incorporated into the model, such as in a hidden semiMarkov model (HSMM). This paper addresses the problem of the accurate segmentation of the first and second heart sound within noisy real-world PCG recordings using an HSMM, extended with the use of logistic regression for emission probability estimation. In addition, we implement a modified Viterbi algorithm for decoding the most likely sequence of states, and evaluated this method on a large dataset of 10 172 s of PCG recorded from 112 patients (including 12 181 first and 11 627 second heart sounds). The proposed method achieved an average F1 score of 95.63 ± 0.85%, while the current state of the art achieved 86.28 ± 1.55% when evaluated on unseen test recordings. The greater discrimination between states afforded using logistic regression as opposed to the previous Gaussian distribution-based emission probability estimation as well as the use of an extended Viterbi algorithm allows this method to significantly outperform the current state-of-the-art method based on a two-sided paired t-test.", "title": "" }, { "docid": "235899b940c658316693d0a481e2d954", "text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. 
These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.", "title": "" }, { "docid": "25828231caaf3288ed4fdb27df7f8740", "text": "This paper reports on an algorithm to support autonomous vehicles in reasoning about occluded regions of their environment to make safe, reliable decisions. In autonomous driving scenarios, other traffic participants are often occluded from sensor measurements by buildings or large vehicles like buses or trucks, which makes tracking dynamic objects challenging.We present a method to augment standard dynamic object trackers with means to 1) estimate the occluded state of other traffic agents and 2) robustly associate the occluded estimates with new observations after the tracked object reenters the visible region of the sensor horizon. We perform occluded state estimation using a dynamics model that accounts for the driving behavior of traffic agents and a hybrid Gaussian mixture model (hGMM) to capture multiple hypotheses over discrete behavior, such as driving along different lanes or turning left or right at an intersection. Upon new observations, we associate them to existing estimates in terms of the Kullback-Leibler divergence (KLD). We evaluate the proposed method in simulation and using a real-world traffic-tracking dataset from an autonomous vehicle platform. Results show that our method can handle significantly prolonged occlusions when compared to a standard dynamic object tracking system.", "title": "" }, { "docid": "434ea2b009a1479925ce20e8171aea46", "text": "Several high-voltage silicon carbide (SiC) devices have been demonstrated over the past few years, and the latest-generation devices are showing even faster switching, and greater current densities. However, there are no commercial gate drivers that are suitable for these high-voltage, high-speed devices. Consequently, there has been a great research effort into the development of gate drivers for high-voltage SiC transistors. 
This work presents the first detailed report on the design and testing of a high-power-density, high-speed, and high-noise-immunity gate drive for a high-current, 10 kV SiC MOSFET module.", "title": "" } ]
scidocsrr
80df194bf7f0aedd9a14fb55de2b3856
The Body and the Beautiful: Health, Attractiveness and Body Composition in Men’s and Women’s Bodies
[ { "docid": "6210a0a93b97a12c2062ac78953f3bd1", "text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.", "title": "" } ]
[ { "docid": "dabbcd5d79b011b7d091ef3a471d9779", "text": "This paper borrows ideas from social science to inform the design of novel \"sensing\" user-interfaces for computing technology. Specifically, we present five design challenges inspired by analysis of human-human communication that are mundanely addressed by traditional graphical user interface designs (GUIs). Although classic GUI conventions allow us to finesse these questions, recent research into innovative interaction techniques such as 'Ubiquitous Computing' and 'Tangible Interfaces' has begun to expose the interaction challenges and problems they pose. By making them explicit we open a discourse on how an approach similar to that used by social scientists in studying human-human interaction might inform the design of novel interaction mechanisms that can be used to handle human-computer communication accomplishments", "title": "" }, { "docid": "9d2ec490b7efb23909abdbf5f209f508", "text": "Terrestrial Laser scanner (TLS) has been widely used in our recent architectural heritage projects and huge quantity of point cloud data was gotten. In order to process the huge quantity of point cloud data effectively and reconstruct their 3D models, more effective methods should be developed based on existing automatic or semiautomatic point cloud processing algorithms. Here introduce a new algorithm for rapid extracting the pillar features of Chinese ancient buildings from their point cloud data, the algorithm has the least human interaction in the data processing and is more efficient to extract pillars from point cloud data than existing feature extracting algorithms. With this algorithm we identify the pillar features by dividing the point cloud into slices firstly, and then get the projective parameters of pillar objects in selected slices, the next compare the local projective parameters in adjacent slices, the next combine them to get the global parameters of the pillars and at last reconstruct the 3d pillar models.", "title": "" }, { "docid": "bd3717bd46869b9be3153478cbd19f2a", "text": "The study was conducted to assess the effectiveness of jasmine oil massage on labour pain during first stage of labour among 40 primigravida women. The study design adopted was true experimental approach with pre-test post-test control group design. The demographic Proforma were collected from the women by interview and Visual analogue scale was used to measure the level of labour pain in both the groups. Data obtained in these areas were analysed by descriptive and inferential statistics. A significant difference was found in the experimental group( t 9.869 , p<0.05) . A significant difference was found between experimental group and control group. cal", "title": "" }, { "docid": "4bd123c2c44e703133e9a6093170db39", "text": "This paper presents a single-phase cascaded H-bridge converter for a grid-connected photovoltaic (PV) application. The multilevel topology consists of several H-bridge cells connected in series, each one connected to a string of PV modules. The adopted control scheme permits the independent control of each dc-link voltage, enabling, in this way, the tracking of the maximum power point for each string of PV panels. Additionally, low-ripple sinusoidal-current waveforms are generated with almost unity power factor. The topology offers other advantages such as the operation at lower switching frequency or lower current ripple compared to standard two-level topologies. 
Simulation and experimental results are presented for different operating conditions.", "title": "" }, { "docid": "e637dc1aee0632f61a29c8609187a98b", "text": "Scene coordinate regression has become an essential part of current camera re-localization methods. Different versions, such as regression forests and deep learning methods, have been successfully applied to estimate the corresponding camera pose given a single input image. In this work, we propose to regress the scene coordinates pixel-wise for a given RGB image by using deep learning. Compared to the recent methods, which usually employ RANSAC to obtain a robust pose estimate from the established point correspondences, we propose to regress confidences of these correspondences, which allows us to immediately discard erroneous predictions and improve the initial pose estimates. Finally, the resulting confidences can be used to score initial pose hypothesis and aid in pose refinement, offering a generalized solution to solve this task.", "title": "" }, { "docid": "7ce9ef05d3f4a92f6b187d7986b70be1", "text": "With the growth in the consumer electronics industry, it is vital to develop an algorithm for ultrahigh definition products that is more effective and has lower time complexity. Image interpolation, which is based on an autoregressive model, has achieved significant improvements compared with the traditional algorithm with respect to image reconstruction, including a better peak signal-to-noise ratio (PSNR) and improved subjective visual quality of the reconstructed image. However, the time-consuming computation involved has become a bottleneck in those autoregressive algorithms. Because of the high time cost, image autoregressive-based interpolation algorithms are rarely used in industry for actual production. In this study, in order to meet the requirements of real-time reconstruction, we use diverse compute unified device architecture (CUDA) optimization strategies to make full use of the graphics processing unit (GPU) (NVIDIA Tesla K80), including a shared memory and register and multi-GPU optimization. To be more suitable for the GPU-parallel optimization, we modify the training window to obtain a more concise matrix operation. Experimental results show that, while maintaining a high PSNR and subjective visual quality and taking into account the I/O transfer time, our algorithm achieves a high speedup of 147.3 times for a Lena image and 174.8 times for a 720p video, compared to the original single-threaded C CPU code with -O2 compiling optimization.", "title": "" }, { "docid": "a8d6a864092b3deb58be27f0f76b02c2", "text": "High-quality word representations have been very successful in recent years at improving performance across a variety of NLP tasks. These word representations are the mappings of each word in the vocabulary to a real vector in the Euclidean space. Besides high performance on specific tasks, learned word representations have been shown to perform well on establishing linear relationships among words. The recently introduced skipgram model improved performance on unsupervised learning of word embeddings that contains rich syntactic and semantic word relations both in terms of accuracy and speed. Word embeddings that have been used frequently on English language, is not applied to Turkish yet. In this paper, we apply the skip-gram model to a large Turkish text corpus and measured the performance of them quantitatively with the \"question\" sets that we generated. 
The learned word embeddings and the question sets are publicly available at our website. Keywords—Word embeddings, Natural Language Processing, Deep Learning", "title": "" }, { "docid": "67a3f92ab8c5a6379a30158bb9905276", "text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.", "title": "" }, { "docid": "41d32df9d58f9c38f75010c87c0c3327", "text": "Evidence from many countries in recent years suggests that collateral values and recovery rates on corporate defaults can be volatile and, moreover, that they tend to go down just when the number of defaults goes up in economic downturns. This link between recovery rates and default rates has traditionally been neglected by credit risk models, as most of them focused on default risk and adopted static loss assumptions, treating the recovery rate either as a constant parameter or as a stochastic variable independent from the probability of default. This traditional focus on default analysis has been partly reversed by the recent significant increase in the number of studies dedicated to the subject of recovery rate estimation and the relationship between default and recovery rates. This paper presents a detailed review of the way credit risk models, developed during the last thirty years, treat the recovery rate and, more specifically, its relationship with the probability of default of an obligor. We also review the efforts by rating agencies to formally incorporate recovery ratings into their assessment of corporate loan and bond credit risk and the recent efforts by the Basel Committee on Banking Supervision to consider “downturn LGD” in their suggested requirements under Basel II. Recent empirical evidence concerning these issues and the latest data on high-yield bond and leverage loan defaults is also presented and discussed.", "title": "" }, { "docid": "db36273a3669e1aeda1bf2c5ab751387", "text": "Autonomous Ground Vehicles designed for dynamic environments require a reliable perception of the real world, in terms of obstacle presence, position and speed. In this paper we present a flexible technique to build, in real time, a dense voxel-based map from a 3D point cloud, able to: (1) discriminate between stationary and moving obstacles; (2) provide an approximation of the detected obstacle's absolute speed using the information of the vehicle's egomotion computed through a visual odometry approach. The point cloud is first sampled into a full 3D map based on voxels to preserve the tridimensional information; egomotion information allows computational efficiency in voxels creation; then voxels are processed using a flood fill approach to segment them into a clusters structure; finally, with the egomotion information, the obtained clusters are labeled as stationary or moving obstacles, and an estimation of their speed is provided. 
The algorithm runs in real time; it has been tested on one of VisLab's AGVs using a modified SGM-based stereo system as 3D data source.", "title": "" }, { "docid": "01962e512740addbe5f444ed581ebb48", "text": "We investigate how neural, encoder-decoder translation systems output target strings of appropriate lengths, finding that a collection of hidden units learns to explicitly implement this functionality.", "title": "" }, { "docid": "262c11ab9f78e5b3f43a31ad22cf23c5", "text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.", "title": "" }, { "docid": "1a0ed30b64fa7f8d39a12acfcadfd763", "text": "This letter presents a smart shelf configuration for radio frequency identification (RFID) application. The proposed shelf has an embedded leaking microstrip transmission line with extended ground plane. This structure, when connected to an RFID reader, allows detecting tagged objects in close proximity with proper field confinement to avoid undesired reading of neighboring shelves. The working frequency band covers simultaneously the three world assigned RFID subbands at ultrahigh frequency (UHF). The concept is explored by full-wave simulations and it is validated with thorough experimental tests.", "title": "" }, { "docid": "ff8089430cdae3e733b06a7aa1b759b4", "text": "We derive a model for consumer loan default and credit card expenditure. The default model is based on statistical models for discrete choice, in contrast to the usual procedure of linear discriminant analysis. The model is then extended to incorporate the default probability in a model of expected profit. The technique is applied to a large sample of applications and expenditure from a major credit card company. The nature of the data mandates the use of models of sample selection for estimation. The empirical model for expected profit produces an optimal acceptance rate for card applications which is far higher than the observed rate used by the credit card vendor based on the discriminant analysis. 
I am grateful to Terry Seaks for valuable comments on an earlier draft of this paper and to Jingbin Cao for his able research assistance. The provider of the data and support for this project has requested anonymity, so I must thank them as such. Their help and support are gratefully acknowledged. Participants in the applied econometrics workshop at New York University also provided useful commentary.", "title": "" }, { "docid": "fb2287cb1c41441049288335f10fd473", "text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly", "title": "" }, { "docid": "92da117d31574246744173b339b0d055", "text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.", "title": "" }, { "docid": "bf294a4c3af59162b2f401e2cdcb060b", "text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. 
Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.", "title": "" }, { "docid": "10318d39b3ad18779accbf29b2f00fcd", "text": "Designing convolutional neural networks (CNN) models for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to design and improve mobile models on all three dimensions, it is challenging to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated neural architecture search approach for designing resourceconstrained mobile CNN models. We propose to explicitly incorporate latency information into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. 
Unlike in previous work, where mobile latency is considered via another, often inaccurate proxy (e.g., FLOPS), in our experiments, we directly measure real-world inference latency by executing the model on a particular platform, e.g., Pixel phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that permits layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our model achieves 74.0% top-1 accuracy with 76ms latency on a Pixel phone, which is 1.5× faster than MobileNetV2 (Sandler et al. 2018) and 2.4× faster than NASNet (Zoph et al. 2018) with the same top-1 accuracy. On the COCO object detection task, our model family achieves both higher mAP quality and lower latency than MobileNets.", "title": "" }, { "docid": "f6a9670544a784a5fc431746557473a3", "text": "Massive multiple-input multiple-output (MIMO) systems are cellular networks where the base stations (BSs) are equipped with unconventionally many antennas, deployed on co-located or distributed arrays. Huge spatial degrees-of-freedom are achieved by coherent processing over these massive arrays, which provide strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure; the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas N. Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today's conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees-of-freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase-drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with N while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make √N the circuit power increase as N, instead of linearly, by careful circuit-aware system design.", "title": "" }, { "docid": "fa20b9427a8dcfd8db90e0a6eb5e7d8c", "text": "Recent functional brain imaging studies suggest that object concepts may be represented, in part, by distributed networks of discrete cortical regions that parallel the organization of sensory and motor systems. In addition, different regions of the left lateral prefrontal cortex, and perhaps anterior temporal cortex, may have distinct roles in retrieving, maintaining and selecting semantic information.", "title": "" } ]
scidocsrr
29dcd6d1734a3189af4c4f89268984e7
FeelSleeve: Haptic Feedback to Enhance Early Reading
[ { "docid": "0a929fa28caa0138c1283d7f54ecccc9", "text": "While predictions abound that electronic books will supplant traditional paper-based books, many people bemoan the coming loss of the book as cultural artifact. In this project we deliberately keep the affordances of paper books while adding electronic augmentation. The Listen Reader combines the look and feel of a real book - a beautiful binding, paper pages and printed images and text - with the rich, evocative quality of a movie soundtrack. The book's multi-layered interactive soundtrack consists of music and sound effects. Electric field sensors located in the book binding sense the proximity of the reader's hands and control audio parameters, while RFID tags embedded in each page allow fast, robust page identification.\nThree different Listen Readers were built as part of a six-month museum exhibit, with more than 350,000 visitors. This paper discusses design, implementation, and lessons learned through the iterative design process, observation, and visitor interviews.", "title": "" } ]
[ { "docid": "efb81d85abcf62f4f3747a58154c5144", "text": "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion. Our code is available at https://github.com/sergeytulyakov/mocogan.", "title": "" }, { "docid": "cfec86113f10f5466ab4778d498eee94", "text": "The platooning of connected and automated vehicles (CAVs) is expected to have a transformative impact on road transportation, e.g., enhancing highway safety, improving traffic utility, and reducing fuel consumption. Requiring only local information, distributed control schemes are scalable approaches to the coordination of multiple CAVs without using centralized communication and computation. From the perspective of multi-agent consensus control, this paper introduces a decomposition framework to model, analyze, and design the platoon system. In this framework, a platoon is naturally decomposed into four interrelated components, i.e., 1) node dynamics, 2) information flow network, 3) distributed controller, and 4) geometry formation. The classic model of each component is summarized according to the results of the literature survey; four main performance metrics, i.e., internal stability, stability margin, string stability, and coherence behavior, are discussed in the same fashion. Also, the basis of typical distributed control techniques is presented, including linear consensus control, distributed robust control, distributed sliding mode control, and distributed model predictive control.", "title": "" }, { "docid": "0d23946f8a94db5943deee81deb3f322", "text": "The Spatial Semantic Hierarchy is a model of knowledge of large-scale space consisting of multiple interacting representations, both qualitative and quantitative. The SSH is inspired by the properties of the human cognitive map, and is intended to serve both as a model of the human cognitive map and as a method for robot exploration and map-building. The multiple levels of the SSH express states of partial knowledge, and thus enable the human or robotic agent to deal robustly with uncertainty during both learning and problem-solving. The control level represents useful patterns of sensorimotor interaction with the world in the form of trajectory-following and hill-climbing control laws leading to locally distinctive states. Local geometric maps in local frames of reference can be constructed at the control level to serve as observers for control laws in particular neighborhoods. 
The causal level abstracts continuous behavior among distinctive states into a discrete model consisting of states linked by actions. The topological level introduces the external ontology of places, paths and regions by abduction to explain the observed pattern of states and actions at the causal level. Quantitative knowledge at the control, causal and topological levels supports a “patchwork map” of local geometric frames of reference linked by causal and topological connections. The patchwork map can be merged into a single global frame of reference at the metrical level when sufficient information and computational resources are available. We describe the assumptions and guarantees behind the generality of the SSH across environments and sensorimotor systems. Evidence is presented from several partial implementations of the SSH on simulated and physical robots.  2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "ed33687781081638ea885e6610ff6010", "text": "Temporal data mining is the application of data mining techniques to data that takes the time dimension into account. This paper studies changes in cluster characteristics of supermarket customers over a 24 week period. Such an analysis can be useful for formulating marketing strategies. Marketing managers may want to focus on specific groups of customers. Therefore they may need to understand the migrations of the customers from one group to another group. The marketing strategies may depend on the desirability of these cluster migrations. The temporal analysis presented here is based on conventional and modified Kohonen self organizing maps (SOM). The modified Kohonen SOM creates interval set representations of clusters using properties of rough sets. A description of an experimental design for temporal cluster migration studies 0020-0255/$ see front matter 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.ins.2004.12.007 * Corresponding author. Tel.: +1 902 420 5798; fax: +1 902 420 5035. E-mail address: pawan.lingras@smu.ca (P. Lingras). 216 P. Lingras et al. / Information Sciences 172 (2005) 215–240 including, data cleaning, data abstraction, data segmentation, and data sorting, is provided. The paper compares conventional and non-conventional (interval set) clustering techniques, as well as temporal and non-temporal analysis of customer loyalty. The interval set clustering is shown to provide an interesting dimension to such a temporal analysis. 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "8b552849d9c41d82171de2e87967836c", "text": "The need for building robots with soft materials emerged recently from considerations of the limitations of service robots in negotiating natural environments, from observation of the role of compliance in animals and plants [1], and even from the role attributed to the physical body in movement control and intelligence, in the so-called embodied intelligence or morphological computation paradigm [2]-[4]. The wide spread of soft robotics relies on numerous investigations of diverse materials and technologies for actuation and sensing, and on research of control techniques, all of which can serve the purpose of building robots with high deformability and compliance. 
But the core challenge of soft robotics research is, in fact, the variability and controllability of such deformability and compliance.", "title": "" }, { "docid": "640824047e480ef5582d140b6595dbd9", "text": "A wideband transition from coplanar waveguide (CPW) to substrate integrated waveguide (SIW) is proposed and presented in the 50 GHz frequency range. Electrically thick alumina was used in this case, representative for other high-permittivity substrates such as semiconductors. Simulations predict less than -15 dB return loss within a 35 % bandwidth. CPW probe measurements were carried out and 40 % bandwidth were achieved at -0.5 dB insertion loss for a single transition. Modified SIW via configurations being suitable for simplified fabrication on electrically thick substrates in the upper millimeter-wave spectrum are discussed in the second part.", "title": "" }, { "docid": "cf998ec01aefef7cd80d2fdd25e872e1", "text": "Shunting inhibition, a conductance increase with a reversal potential close to the resting potential of the cell, has been shown to have a divisive effect on subthreshold excitatory postsynaptic potential amplitudes. It has therefore been assumed to have the same divisive effect on firing rates. We show that shunting inhibition actually has a subtractive effecton the firing rate in most circumstances. Averaged over several interspike intervals, the spiking mechanism effectively clamps the somatic membrane potential to a value significantly above the resting potential, so that the current through the shunting conductance is approximately independent of the firing rate. This leads to a subtractive rather than a divisive effect. In addition, at distal synapses, shunting inhibition will also have an approximately subtractive effect if the excitatory conductance is not small compared to the inhibitory conductance. Therefore regulating a cell's passive membrane conductancefor instance, via massive feedbackis not an adequate mechanism for normalizing or scaling its output.", "title": "" }, { "docid": "f4c51f4790114c42bef19ff421c83f0d", "text": "Real-time systems are growing in complexity and realtime and soft real-time applications are becoming common in general-purpose computing environments. Thus, there is a growing need for scheduling solutions that simultaneously support processes with a variety of different timeliness constraints. Toward this goal we have developed the Resource Allocation/Dispatching (RAD) integrated scheduling model and the Rate-Based Earliest Deadline (RBED) integrated multi-class real-time scheduler based on this model. We present RAD and the RBED scheduler and formally prove the correctness of the operations that RBED employs. We then describe our implementation of RBED and present results demonstrating how RBED simultaneously and seamlessly supports hard real-time, soft real-time, and best-effort processes.", "title": "" }, { "docid": "5663c9fc6eb66c718235e51d8932dab4", "text": "As the number of academic papers and new technologies soars, it has been increasingly difficult for researchers, especially beginners, to enter a new research field. Researchers often need to study a promising paper in depth to keep up with the forefront of technology. Traditional Query-Oriented study method is time-consuming and even tedious. For a given paper, existent academic search engines like Google Scholar tend to recommend relevant papers, failing to reveal the knowledge structure. 
The state-of-the-art MapOriented study methods such as AMiner and AceMap can structure scholar information, but they’re too coarse-grained to dig into the underlying principles of a specific paper. To address this problem, we propose a Study-Map Oriented method and a novel model called RIDP (Reference Injection based Double-Damping PageRank) to help researchers study a given paper more efficiently and thoroughly. RIDP integrates newly designed Reference Injection based Topic Analysis method and Double-Damping PageRank algorithm to mine a Study Map out of massive academic papers in order to guide researchers to dig into the underlying principles of a specific paper. Experiment results on real datasets and pilot user studies indicate that our method can help researchers acquire knowledge more efficiently, and grasp knowledge structure systematically.", "title": "" }, { "docid": "26d20cd47dfd174ecb8606b460c1c040", "text": "In this article, we use an automated bottom-up approach to identify semantic categories in an entire corpus. We conduct an experiment using a word vector model to represent the meaning of words. The word vectors are then clustered, giving a bottom-up representation of semantic categories. Our main finding is that the likelihood of changes in a word’s meaning correlates with its position within its cluster.", "title": "" }, { "docid": "380807461bd4216e89013f65c0ff9334", "text": "The Optimal Power Flow (OPF) is an important criterion in today’s power system operation and control due to scarcity of energy resources, increasing power generation cost and ever growing demand for electric energy. As the size of the power system increases, load may be varying. The generators should share the total demand plus losses among themselves. The sharing should be based on the fuel cost of the total generation with respect to some security constraints. Conventional optimization methods that make use of derivatives and gradients are, in general, not able to locate or identify the global optimum. Heuristic algorithms such as genetic algorithms (GA) and evolutionary programming have been recently proposed for solving the OPF problem. Unfortunately, recent research has identified some deficiencies in GA performance. Recently, a new evolutionary computation technique, called Particle Swarm Optimization (PSO), has been proposed and introduced. This technique combines social psychology principles in socio-cognition human agents and evolutionary computations. In this paper, a novel PSO based approach is presented to solve Optimal Power Flow problem.", "title": "" }, { "docid": "4cf4dc8453a39668078d5ca9c6aafd63", "text": "Expert-curated Guides to the Best of CS Research", "title": "" }, { "docid": "87b67f9ed23c27a71b6597c94ccd6147", "text": "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. 
Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.", "title": "" }, { "docid": "2c4cf66b08d26bb7e81515151c4f9e37", "text": "One of the major developments of the second year of human life is the emergence of the ability to pretend. A child's knowledge of a real situation is apparently contradicted and distorted by pretense. If, as generally assumed, the child is just beginning to construct a system for internally representing such knowledge, why is this system of representation not undermined by its use in both comprehend ing and producing pretense? In this article I present a theoretical analysis of the representational mechanism underlying this ability. This mechanism extends the power of the infant's existing capacity for (primary) representation, creating a capacity for metarepresentation. It is this, developing toward the end of infancy, that underlies the child's new abilities to pretend and to understand pretense in others. There is a striking isomorphism between the three fundamental forms of pretend play and three crucial logical properties of mental stale expressions in language. This isomorphism points to a common underlying form of internal representation that is here called metarepresentation. A performance model, the decoupler, is outlined embodying ideas about how an infant might compute the complex function postulated to underlie pretend play. This model also reveals pretense as an early manifestation of the ability to understand mental states. Aspects of later preschool development, both normal and abnormal, are discussed in the light of the new model. This theory begins the task of characterizing the specific innate basis of our commonsense “theory of mind.”", "title": "" }, { "docid": "7749fd32da3e853f9e9cfea74ddda5f8", "text": "This study describes the roles of architects in scaling agile frameworks with the help of a structured literature review. We aim to provide a primary analysis of 20 identified scaling agile frameworks. Subsequently, we thoroughly describe three popular scaling agile frameworks: Scaled Agile Framework, Large Scale Scrum, and Disciplined Agile 2.0. After specifying the main concepts of scaling agile frameworks, we characterize roles of enterprise, software, solution, and information architects, as identified in four scaling agile frameworks. Finally, we provide a discussion of generalizable findings on the role of architects in scaling agile frameworks.", "title": "" }, { "docid": "f267f73e9770184fbe617446ee4782c0", "text": "Juvenile dermatomyositis (JDM) is a rare, potentially life-threatening systemic autoimmune disease primarily affecting muscle and skin. Recent advances in the recognition, standardised assessment and treatment of JDM have been greatly facilitated by large collaborative research networks. Through these networks, a number of immunogenetic risk factors have now been defined, as well as a number of potential pathways identified in the aetio-pathogenesis of JDM. 
Myositis-associated and myositis-specific autoantibodies are helping to sub-phenotype JDM, defined by clinical features, outcomes and immunogenetic risk factors. Partially validated tools to assess disease activity and damage have assisted in standardising outcomes. Aggressive treatment approaches, including multiple initial therapies, as well as new drugs and biological therapies for refractory disease, offer promise of improved outcomes and less corticosteroid-related toxicity.", "title": "" }, { "docid": "27dc2972f39f613b08217c6b2486220b", "text": "Handwritten character recognition is always an interesting area of pattern recognition for research in the field of image processing. Many researchers have presented their work in this area and still research is undergoing to achieve high accuracy. This paper is mainly concerned for the people who are working on the character recognition and review of work to recognize handwritten character for various Indian languages. The objective of this paper is to describe the set of preprocessing, segmentation, feature extraction and classification techniques.", "title": "" }, { "docid": "22d8bfa59bb8e25daa5905dbb9e1deea", "text": "BACKGROUND\nSubacromial impingement syndrome (SAIS) is a painful condition resulting from the entrapment of anatomical structures between the anteroinferior corner of the acromion and the greater tuberosity of the humerus.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the short-term effectiveness of high-intensity laser therapy (HILT) versus ultrasound (US) therapy in the treatment of SAIS.\n\n\nDESIGN\nThe study was designed as a randomized clinical trial.\n\n\nSETTING\nThe study was conducted in a university hospital.\n\n\nPATIENTS\nSeventy patients with SAIS were randomly assigned to a HILT group or a US therapy group.\n\n\nINTERVENTION\nStudy participants received 10 treatment sessions of HILT or US therapy over a period of 2 consecutive weeks.\n\n\nMEASUREMENTS\nOutcome measures were the Constant-Murley Scale (CMS), a visual analog scale (VAS), and the Simple Shoulder Test (SST).\n\n\nRESULTS\nFor the 70 study participants (42 women and 28 men; mean [SD] age=54.1 years [9.0]; mean [SD] VAS score at baseline=6.4 [1.7]), there were no between-group differences at baseline in VAS, CMS, and SST scores. At the end of the 2-week intervention, participants in the HILT group showed a significantly greater decrease in pain than participants in the US therapy group. Statistically significant differences in change in pain, articular movement, functionality, and muscle strength (force-generating capacity) (VAS, CMS, and SST scores) were observed after 10 treatment sessions from the baseline for participants in the HILT group compared with participants in the US therapy group. In particular, only the difference in change of VAS score between groups (1.65 points) surpassed the accepted minimal clinically important difference for this tool.\n\n\nLIMITATIONS\nThis study was limited by sample size, lack of a control or placebo group, and follow-up period.\n\n\nCONCLUSIONS\nParticipants diagnosed with SAIS showed greater reduction in pain and improvement in articular movement functionality and muscle strength of the affected shoulder after 10 treatment sessions of HILT than did participants receiving US therapy over a period of 2 consecutive weeks.", "title": "" }, { "docid": "779a73da4551831f50b8705f3339b5e0", "text": "Android’s permission system offers an all-or-nothing choice when installing an app. 
To make it more flexible and fine-grained, users may choose a popular app tool, called permission manager, to selectively grant or revoke an app’s permissions at runtime. A fundamental requirement for such permission manager is that the granted or revoked permissions should be enforced faithfully. However, we discover that none of existing permission managers meet this requirement due to permission leaks, in which an unprivileged app can exercise certain permissions which are revoked or not-granted through communicating with a privileged app.To address this problem, we propose a secure, usable, and transparent OS-level middleware for any permission manager to defend against the permission leaks. The middleware is provably secure in a sense that it can effectively block all possible permission leaks.The middleware is designed to have a minimal impact on the usability of running apps. In addition, the middleware is transparent to users and app developers and it requires minor modifications on permission managers and Android OS. Finally, our evaluation shows that the middleware incurs relatively low performance overhead and power consumption.", "title": "" }, { "docid": "f86dfe07f73e2dba05796e6847765e7a", "text": "OBJECTIVE\nThe aim of this study was to extend previous examinations of aviation accidents to include specific aircrew, environmental, supervisory, and organizational factors associated with two types of commercial aviation (air carrier and commuter/ on-demand) accidents using the Human Factors Analysis and Classification System (HFACS).\n\n\nBACKGROUND\nHFACS is a theoretically based tool for investigating and analyzing human error associated with accidents and incidents. Previous research has shown that HFACS can be reliably used to identify human factors trends associated with military and general aviation accidents.\n\n\nMETHOD\nUsing data obtained from both the National Transportation Safety Board and the Federal Aviation Administration, 6 pilot-raters classified aircrew, supervisory, organizational, and environmental causal factors associated with 1020 commercial aviation accidents that occurred over a 13-year period.\n\n\nRESULTS\nThe majority of accident causal factors were attributed to aircrew and the environment, with decidedly fewer associated with supervisory and organizational causes. Comparisons were made between HFACS causal categories and traditional situational variables such as visual conditions, injury severity, and regional differences.\n\n\nCONCLUSION\nThese data will provide support for the continuation, modification, and/or development of interventions aimed at commercial aviation safety.\n\n\nAPPLICATION\nHFACS provides a tool for assessing human factors associated with accidents and incidents.", "title": "" } ]
scidocsrr
b6059999286f330c2ec16f2819293603
Dominant Resource Fairness: Fair Allocation of Multiple Resource Types
[ { "docid": "5ebefc9d5889cb9c7e3f83a8b38c4cb4", "text": "As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.", "title": "" }, { "docid": "4faa0200a566de5300fa028410c1756d", "text": "We propose and analyze a proportional share resource allocation algorithm for realizing real-time performance in time-shared operating systems. Processes are assigned a weight which determines a share (percentage) of the resource they are to receive. The resource is then allocated in discrete-sized time quanta in such a manner that each process makes progress at a precise, uniform rate. Proportional share allocation algorithms are of interest because (1) they provide a natural means of seamlessly integrating realand nonreal-time processing, (2) they are easy to implement, (3) they provide a simple and e ective means of precisely controlling the real-time performance of a process, and (4) they provide a natural mean of policing so that processes that use more of a resource than they request have no ill-e ect on well-behaved processes. We analyze our algorithm in the context of an idealized system in which a resource is assumed to be granted in arbitrarily small intervals of time and show that our algorithm guarantees that the di erence between the service time that a process should receive in the idealized system and the service time it actually receives in the real system is optimally bounded by the size of a Supported by GAANN fellowship. Dept. of CS, Old Dominion Univ., Norfolk, VA 23529-0162 (stoica@cs.odu.edu). ySupported by NSF grant CCR 95{9313857. Dept. of CS, Old Dominion Univ., Norfolk, VA 23529-0162 (wahab@cs.odu.edu). zSupported by grant from the IBM & Intel corps and NSF grant CCR 95{10156. Dpt. of CS, Univ. of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3175, (jeffay@cs.unc.edu). xSupported by NSF under Research Initiation Award CCR{ 9596282. Dept. of CS, Univ. of Vermont, Burlignton, VT 05405, (sanjoy@cs.uvm.edu). {Dpt. of CS, Univ. of Wisconsin-Madison, Madison, WI 53706-1685 (johannes@cs.wisc.edu). kSupported by NSF grant CCR{9504145, and the Texas Advanced Research Program under grant No. ARP{93{00365{461. Dpt. of CS, Univ. of Texas at Austin, Austin, TX 78712-1188 (plaxton@cs.utexas.edu). time quantum. In addition, the algorithm provides support for dynamic operations, such as processes joining or leaving the competition, and for both fractional and non-uniform time quanta. As a proof of concept we have implemented a prototype of a CPU scheduler under FreeBSD. 
The experimental results show that our implementation performs within the theoretical bounds and hence supports real-time execution in a general-purpose operating system.", "title": "" } ]
[ { "docid": "6677149025a415e44778d1011b617c36", "text": "In this paper controller synthesis based on standard and dynamic sliding modes for an uncertain nonlinear MIMO Three tank System is presented. Two types of sliding mode controllers are synthesized; first controller is based on standard first order sliding modes while second controller uses dynamic sliding modes. Sliding manifolds for both controllers are designed in-order to ensure finite time convergence of sliding variable for tracking the desired system trajectories. Simulation results are presented showing the performance analysis of both sliding mode controllers. Simulations are also carried out to assess the performance of dynamic sliding mode controller against parametric uncertainties / disturbances. A comparison of designed sliding mode controllers with LMI based robust H∞ controller is also discussed. The performance of dynamic sliding mode control in terms of response time, control effort and robustness of dynamic sliding mode controller is shown to be better than standard sliding mode controller and H∞ controllers.", "title": "" }, { "docid": "db25bafd722f5a491f5e48a133a2cd9c", "text": "Storytelling humankind’s universal choice for content transmission is becoming of great importance in the field of computer graphics, as the human ability to keep track of information in the information society of the 21 century is dependent on the quality of the information providing systems. Basically, the first steps towards storytelling systems have been taken; everyone today has the possibility to step into enfolding 3D worlds and become immersed in extensive loads of data. However, there is still a great backlog on the human-like organization of the associated data. The reason for this is the absence of the basic authoring systems for interactive storytelling. This position paper presents an approach to new authoring methods for interactive storytelling. It considers the author’s view of the tools to be used and introduces a coherent environment that does not restrict the creative process and lets the author feel comfortable, leading him to create well-narrated, interactive non-linear stories.", "title": "" }, { "docid": "cd0c68845416f111307ae7e14bfb7491", "text": "Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals' activity space. First, a survey was conducted to collect individuals' daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. 
The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment.", "title": "" }, { "docid": "6c8151eee3fcfaec7da724c2a6899e8f", "text": "Classic work on interruptions by Zeigarnik showed that tasks that were interrupted were more likely to be recalled after a delay than tasks that were not interrupted. Much of the literature on interruptions has been devoted to examining this effect, although more recently interruptions have been used to choose between competing designs for interfaces to complex devices. However, none of this work looks at what makes some interruptions disruptive and some not. This series of experiments uses a novel computer-based adventure-game methodology to investigate the effects of the length of the interruption, the similarity of the interruption to the main task, and the complexity of processing demanded by the interruption. It is concluded that subjects make use of some form of nonarticulatory memory which is not affected by the length of the interruption. It is affected by processing similar material however, and by a complex mentalarithmetic task which makes large demands on working memory.", "title": "" }, { "docid": "a44c1d66db443d44850044b3b20a9cae", "text": "In this paper, a dual-polarized microstrip array antenna with orthogonal feed circuit is proposed. The proposed microstrip array antenna consists of a single substrate layer. The proposed array antenna has microstrip antenna elements, microstrip lines, air-bridges and cross slot lines. For dual polarization, an orthogonal feed circuit uses the Both-Sided MIC Technology including air-bridges. The Both-Sided MIC Technology is one of the useful MIC technologies for realizing a simple feed circuit. The air-bridges are often used for MMICs because it is possible to reduce the circuit complexity. The characteristics of proposed array antenna are investigated by both the simulation and the experiment. Consequently, it is confirmed that the proposed array antenna with the orthogonal feed circuit has dual polarization performance with very simple structure. The proposed array antenna will be a basic technology to realize high performance and attractive multifunction antennas.", "title": "" }, { "docid": "2a8c5de43ce73c360a5418709a504fa8", "text": "The INTERSPEECH 2018 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the Atypical Affect Sub-Challenge, four basic emotions annotated in the speech of handicapped subjects have to be classified; in the Self-Assessed Affect Sub-Challenge, valence scores given by the speakers themselves are used for a three-class classification problem; in the Crying Sub-Challenge, three types of infant vocalisations have to be told apart; and in the Heart Beats Sub-Challenge, three different types of heart beats have to be determined. 
We describe the Sub-Challenges, their conditions, and baseline feature extraction and classifiers, which include data-learnt (supervised) feature representations by end-to-end learning, the ‘usual’ ComParE and BoAW features, and deep unsupervised representation learning using the AUDEEP toolkit for the first time in the challenge series.", "title": "" }, { "docid": "4d87ec21b7869f254584ebb4e789e1c2", "text": "This paper critically analyses the views of poverty adopted by different economic schools of thought which are relevant to the UK, as well as eclectic theories focused on social exclusion and social capital. We contend that each of the economic approaches has an important contribution to make to the understanding of poverty but that no theory is sufficient in itself; a selective synthesis is needed. Furthermore, economics by its nature omits important aspects of the nature and causes of poverty. The key points that follow from this analysis are:  The definitions of poverty adopted over time have reflected a shift in thinking from a focus on monetary aspects to wider issues such as political participation and social exclusion.  Classical economic traditions contend that individuals are ultimately responsible for poverty and accordingly provide a foundation for laissez faire policies. By contrast, Neoclassical (mainstream) economics is more diverse and can provide explanations for poverty, notably market failures, that are beyond individuals’ control.  Both schools centre on the role of incentives and individual productivity in generating poverty but perhaps overemphasise monetary aspects, the individual as opposed to the group, and a limited role for government. They tend to be averse to policies of redistribution.  Keynesian/neo-liberal schools, in contrast, focus on macroeconomic forces and emphasise the key role of government in providing not only economic stabilisation but also public goods. Poverty is considered largely involuntary and mainly caused by unemployment.  Marxian/radical views see the role of class and group discrimination, which are largely political issues, as central to poverty. These theories assign a central role to the state in its intervention/regulation of markets. Prominent examples of anti-poverty proposals in this vein include minimum wages and anti discriminatory laws.  Social exclusion and social capital theories recognise the role of social as well as economic factors in explaining poverty, giving them a similar weight. They offer a helpful contribution in understanding not only what the precursors of poverty are but also what underlies its persistence over time  A selective synthesis of approaches is needed to maximise the relevance of economic insights in poverty reduction; furthermore, there is a need for a broader and richer range of motivations for human behaviour beyond the key focus of economics on purely material and individualistic aspects, such as the maximisation of one’s own consumption less disutility of labour. This calls for an integrated approach that draws elements from other social disciplines such as political theory and sociology. 
 The analysis implies a number of policy recommendations, notably the need to focus on provision of forms of capital (including education) to aid the poor; anti discriminatory laws; community development; and policies to offset adverse incentives and market failures that underlie poverty.", "title": "" }, { "docid": "22ab8eb2b8eaafb2ee72ea0ed7148ca4", "text": "As travel is taking more significant part in our life, route recommendation service becomes a big business and attracts many major players in IT industry. Given a pair of user-specified origin and destination, a route recommendation service aims to provide users with the routes of best travelling experience according to criteria, such as travelling distance, travelling time, traffic condition, etc. However, previous research shows that even the routes recommended by the big-thumb service providers can deviate significantly from the routes travelled by experienced drivers. It means travellers' preferences on route selection are influenced by many latent and dynamic factors that are hard to model exactly with pre-defined formulas. In this work we approach this challenging problem with a very different perspective- leveraging crowds' knowledge to improve the recommendation quality. In this light, CrowdPlanner - a novel crowd-based route recommendation system has been developed, which requests human workers to evaluate candidate routes recommended by different sources and methods, and determine the best route based on their feedbacks. In this paper, we particularly focus on two important issues that affect system performance significantly: (1) how to efficiently generate tasks which are simple to answer but possess sufficient information to derive user-preferred routes; and (2) how to quickly identify a set of appropriate domain experts to answer the questions timely and accurately. Specifically, the task generation component in our system generates a series of informative and concise questions with optimized ordering for a given candidate route set so that workers feel comfortable and easy to answer. In addition, the worker selection component utilizes a set of selection criteria and an efficient algorithm to find the most eligible workers to answer the questions with high accuracy. A prototype system has been deployed to many voluntary mobile clients and extensive tests on real-scenario queries have shown the superiority of CrowdPlanner in comparison with the results given by map services and popular route mining algorithms.", "title": "" }, { "docid": "86788d61d590dec525d0bdf86c44add0", "text": "Algae are among the most potentially significant sources of sustainable biofuels in the future of renewable energy. A feedstock with virtually unlimited applicability, algae can metabolize various waste streams (e.g., municipal wastewater, carbon dioxide from industrial flue gas) and produce products with a wide variety of compositions and uses. These products include lipids, which can be processed into biodiesel; carbohydrates, which can be processed into ethanol; and proteins, which can be used for human and animal consumption. Algae are commonly genetically engineered to allow for advantageous process modification or optimization. However, issues remain regarding human exposure to algae-derived toxins, allergens, and carcinogens from both existing and genetically modified organisms (GMOs), as well as the overall environmental impact of GMOs. 
A literature review was performed to highlight issues related to the growth and use of algal products for generating biofuels. Human exposure and environmental impact issues are identified and discussed, as well as current research and development activities of academic, commercial, and governmental groups. It is hoped that the ideas contained in this paper will increase environmental awareness of issues surrounding the production of algae and will help the algae industry develop to its full potential.", "title": "" }, { "docid": "ff21965814f1277a0fedc0a2f74450d0", "text": "Email has become one of the fastest and most economical forms of communication. However, the increase of email users has resulted in the dramatic increase of spam emails during the past few years. As spammers always try to find a way to evade existing filters, new filters need to be developed to catch spam. Ontologies allow for machine-understandable semantics of data. It is important to share information with each other for more effective spam filtering. Thus, it is necessary to build ontology and a framework for efficient email filtering. Using ontology that is specially designed to filter spam, bunch of unsolicited bulk email could be filtered out on the system. Similar to other filters, the ontology evolves with the user requests. Hence the ontology would be customized for the user. This paper proposes to find an efficient spam email filtering method using adaptive ontology", "title": "" }, { "docid": "ce3cd1edffb0754e55658daaafe18df6", "text": "Fact finders in legal trials often need to evaluate a mass of weak, contradictory and ambiguous evidence. There are two general ways to accomplish this task: by holistically forming a coherent mental representation of the case, or by atomistically assessing the probative value of each item of evidence and integrating the values according to an algorithm. Parallel constraint satisfaction (PCS) models of cognitive coherence posit that a coherent mental representation is created by discounting contradicting evidence, inflating supporting evidence and interpreting ambivalent evidence in a way coherent with the emerging decision. This leads to inflated support for whichever hypothesis the fact finder accepts as true. Using a Bayesian network to model the direct dependencies between the evidence, the intermediate hypotheses and the main hypothesis, parameterised with (conditional) subjective probabilities elicited from the subjects, I demonstrate experimentally how an atomistic evaluation of evidence leads to a convergence of the computed posterior degrees of belief in the guilt of the defendant of those who convict and those who acquit. The atomistic evaluation preserves the inherent uncertainty that largely disappears in a holistic evaluation. Since the fact finders’ posterior degree of belief in the guilt of the defendant is the relevant standard of proof in many legal systems, this result implies that using an atomistic evaluation of evidence, the threshold level of posterior belief in guilt required for a conviction may often not be reached. ⃰ Max Planck Institute for Research on Collective Goods, Bonn", "title": "" }, { "docid": "8f9e7cdce0adb8c34edf8100e125d9f1", "text": "An approach to the coordinated sharing and interchange of computerized information is described emphasizing partial, controlled sharing among autonomous databases. Office information systems provide a particularly appropriate context for this type of information sharing and exchange. 
A federated database architecture is described in which a collection of independent database systems are united into a loosely coupled federation in order to share and exchange information. A federation consists of components (of which there may be any number) and a single federal dictionary. The components represent individual users, applications, workstations, or other components in an office information system. The federal dictionary is a specialized component that maintains the topology of the federation and oversees the entry of new components. Each component in the federation controls its interactions with other components by means of an export schema and an import schema. The export schema specifies the information that a component will share with other components, while the import schema specifies the nonlocal information that a component wishes to manipulate. The federated architecture provides mechanisms for sharing data, for sharing transactions (via message types) for combining information from several components, and for coordinating activities among autonomous components (via negotiation). A prototype implementation of the federated database mechanism is currently operational on an experimental basis.", "title": "" }, { "docid": "14440d5bac428cf5202aa7b7163cb6bc", "text": "Global energy consumption is projected to increase, even in the face of substantial declines in energy intensity, at least 2-fold by midcentury relative to the present because of population and economic growth. This demand could be met, in principle, from fossil energy resources, particularly coal. However, the cumulative nature of CO(2) emissions in the atmosphere demands that holding atmospheric CO(2) levels to even twice their preanthropogenic values by midcentury will require invention, development, and deployment of schemes for carbon-neutral energy production on a scale commensurate with, or larger than, the entire present-day energy supply from all sources combined. Among renewable energy resources, solar energy is by far the largest exploitable resource, providing more energy in 1 hour to the earth than all of the energy consumed by humans in an entire year. In view of the intermittency of insolation, if solar energy is to be a major primary energy source, it must be stored and dispatched on demand to the end user. An especially attractive approach is to store solar-converted energy in the form of chemical bonds, i.e., in a photosynthetic process at a year-round average efficiency significantly higher than current plants or algae, to reduce land-area requirements. Scientific challenges involved with this process include schemes to capture and convert solar energy and then store the energy in the form of chemical bonds, producing oxygen from water and a reduced fuel such as hydrogen, methane, methanol, or other hydrocarbon species.", "title": "" }, { "docid": "ba1cbd5fcd98158911f4fb6f677863f9", "text": "Classical approaches to clean data have relied on using integrity constraints, statistics, or machine learning. These approaches are known to be limited in the cleaning accuracy, which can usually be improved by consulting master data and involving experts to resolve ambiguity. The advent of knowledge bases KBs both general-purpose and within enterprises, and crowdsourcing marketplaces are providing yet more opportunities to achieve higher accuracy at a larger scale. 
We propose KATARA, a knowledge base and crowd powered data cleaning system that, given a table, a KB, and a crowd, interprets table semantics to align it with the KB, identifies correct and incorrect data, and generates top-k possible repairs for incorrect data. Experiments show that KATARA can be applied to various datasets and KBs, and can efficiently annotate data and suggest possible repairs.", "title": "" }, { "docid": "aecacf7d1ba736899f185ee142e32522", "text": "BACKGROUND\nLow rates of handwashing compliance among nurses are still reported in literature. Handwashing beliefs and attitudes were found to correlate and predict handwashing practices. However, such an important field is not fully explored in Jordan.\n\n\nOBJECTIVES\nThis study aims at exploring Jordanian nurses' handwashing beliefs, attitudes, and compliance and examining the predictors of their handwashing compliance.\n\n\nMETHODS\nA cross-sectional multicenter survey design was used to collect data from registered nurses and nursing assistants (N = 198) who were providing care to patients in governmental hospitals in Jordan. Data collection took place over 3 months during the period of February 2011 to April 2011 using the Handwashing Assessment Inventory.\n\n\nRESULTS\nParticipants' mean score of handwashing compliance was 74.29%. They showed positive attitudes but seemed to lack knowledge concerning handwashing. Analysis revealed a 5-predictor model, which accounted for 37.5% of the variance in nurses' handwashing compliance. Nurses' beliefs relatively had the highest prediction effects (β = .309, P < .01), followed by skin assessment (β = .290, P < .01).\n\n\nCONCLUSION\nJordanian nurses reported moderate handwashing compliance and were found to lack knowledge concerning handwashing protocols, for which education programs are recommended. This study raised the awareness regarding the importance of complying with handwashing protocols.", "title": "" }, { "docid": "052f074a8b43691009e1649bb6e378e1", "text": "Image forensics has now raised the anxiety of justice as increasing cases of abusing tampered images in newspapers and court for evidence are reported recently. With the goal of verifying image content authenticity, passive-blind image tampering detection is called for. More realistic open benchmark databases are also needed to assist the techniques. Recently, we collect a natural color image database with realistic tampering operations. The database is made publicly available for researchers to compare and evaluate their proposed tampering detection techniques. We call this database CASI-A Image Tampering Detection Evaluation Database. We describe the purpose, the design criterion, the organization and self-evaluation of this database in this paper.", "title": "" }, { "docid": "d1bd5406b31cec137860a73b203d6bef", "text": "A chemical-mechanical planarization (CMP) model based on lubrication theory is developed which accounts for pad compressibility, pad porosity and means of slurry delivery. Slurry ®lm thickness and velocity distributions between the pad and the wafer are predicted using the model. Two regimes of CMP operation are described: the lubrication regime (for ,40±70 mm slurry ®lm thickness) and the contact regime (for thinner ®lms). These regimes are identi®ed for two different pads using experimental copper CMP data and the predictions of the model. The removal rate correlation based on lubrication and mass transport theory agrees well with our experimental data in the lubrication regime. 
", "title": "" }, { "docid": "bdffbc914108cb74c4130345e568e543", "text": "Early disease detection is a major challenge in the agriculture field. Hence proper measures have to be taken to fight bioaggressors of crops while minimizing the use of pesticides. The techniques of machine vision are extensively applied to agricultural science, and they hold great promise especially in the plant protection field, which ultimately leads to better crop management. Our goal is early detection of bioaggressors. The paper describes a software prototype system for pest detection on infected images of different leaves. Images of the infected leaf are captured by a digital camera and processed using image growing and image segmentation techniques to detect the infected parts of the particular plants. The detected part is then processed for further feature extraction, which gives a general idea about the pests. The approach provides automatic detection and calculation of the area of infection on leaves caused by a whitefly (Trialeurodes vaporariorum Westwood) at a mature stage.", "title": "" }, { "docid": "bbc802e8653c6ae6cb643acc649de471", "text": "To overcome the power delivery limitations of batteries and energy storage limitations of ultracapacitors, hybrid energy storage systems, which combine the two energy sources, have been proposed. A comprehensive review of the state of the art is presented. In addition, a method of optimizing the operation of a battery/ultracapacitor hybrid energy storage system (HESS) is presented. The goal is to set the state of charge of the ultracapacitor and the battery in a way which ensures that the available power and energy is sufficient to supply the drivetrain. By utilizing an algorithm where the states of charge of both systems are tightly controlled, we allow the overall system size to be reduced since more power is available from a smaller energy storage system.", "title": "" }, { "docid": "8de0f6d53158ce5ff9bc2ae269d7ee5e", "text": "Writing high-performance GPU implementations of graph algorithms can be challenging. In this paper, we argue that three optimizations called throughput optimizations are key to high performance for this application class. These optimizations describe a large implementation space, making it unrealistic for programmers to implement them by hand. \n To address this problem, we have implemented these optimizations in a compiler that produces CUDA code from an intermediate-level program representation called IrGL. Compared to state-of-the-art handwritten CUDA implementations of eight graph applications, code generated by the IrGL compiler is up to 5.95x faster (median 1.4x) for five applications and never more than 30% slower for the others. Throughput optimizations contribute an improvement of up to 4.16x (median 1.4x) to the performance of unoptimized IrGL code.", "title": "" } ]
scidocsrr
305b328755a9b446456c52a00c000c49
Adversarial Image Perturbation for Privacy Protection: A Game Theory Perspective
[ { "docid": "9f635d570b827d68e057afcaadca791c", "text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.", "title": "" }, { "docid": "f550f06ab3d8a13e6ae30454bc2812ac", "text": "Deep neural networks are powerful and popular learning models that achieve stateof-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when these systems are deployed in the real world. In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network. Our attacks treat the network as an oracle (blackbox) and only assume that the output of the network can be observed on the probed inputs. Our first attack is based on a simple idea of adding perturbation to a randomly selected single pixel or a small set of them. We then improve the effectiveness of this attack by carefully constructing a small set of pixels to perturb by using the idea of greedy local-search. Our proposed attacks also naturally extend to a stronger notion of misclassification. Our extensive experimental results illustrate that even these elementary attacks can reveal a deep neural network’s vulnerabilities. The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test for designing robust networks.", "title": "" }, { "docid": "11a69c06f21e505b3e05384536108325", "text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. 
Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those using signals from cameras and other sensors as input. This paper shows that even in such physical-world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "title": "" } ]
[ { "docid": "883191185d4671164eb4f12f19eb47f3", "text": "Lustre is a declarative, data-flow language, which is devoted to the specification of synchronous and real-time applications. It ensures efficient code generation and provides formal specification and verification facilities. A graphical tool dedicated to the development of critical embedded systems and often used by industries and professionals is SCADE (Safety Critical Application Development Environment). SCADE is a graphical environment based on the LUSTRE language and it allows the hierarchical definition of the system components and the automatic code generation. This research work is partially concerned with Lutess, a testing environment which automatically transforms formal specifications into test data generators.", "title": "" }, { "docid": "15e440bc952db5b0ad71617e509770b9", "text": "The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and we use the categories of the annotation schemes to apply different weights to the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information.", "title": "" }, { "docid": "8182c4d6995d3a385219990f0b1909fa", "text": "Random forests are becoming increasingly popular in many scientific fields because they can cope with \"small n large p\" problems, complex interactions and even highly correlated predictor variables. Their variable importance measures have recently been suggested as screening tools for, e.g., gene expression studies. However, these variable importance measures show a bias towards correlated predictor variables. We identify two mechanisms responsible for this finding: (i) A preference for the selection of correlated predictors in the tree building process and (ii) an additional advantage for correlated predictor variables induced by the unconditional permutation scheme that is employed in the computation of the variable importance measure. Based on these considerations we develop a new, conditional permutation scheme for the computation of the variable importance measure. The resulting conditional variable importance reflects the true impact of each predictor variable more reliably than the original marginal approach.", "title": "" }, { "docid": "b53ee86e671ea8db6f9f84c8c02c2b5b", "text": "The accurate estimation of students’ grades in future courses is important as it can inform the selection of next term’s courses and create personalized degree pathways to facilitate successful and timely graduation. This paper presents future course grade predictions methods based on sparse linear and low-rank matrix factorization models that are specific to each course or student–course tuple. 
These methods identify the predictive subsets of prior courses on a course-by-course basis and better address problems associated with the not-missing-at-random nature of the student–course historical grade data. The methods were evaluated on a dataset obtained from the University of Minnesota, for two different departments with different characteristics. This evaluation showed that focusing on course-specific data improves the accuracy of grade prediction.", "title": "" }, { "docid": "1efd6da40ac525921b63257d9a3990be", "text": "Movie plot summaries are expected to reflect the genre of movies since many spectators read the plot summaries before deciding to watch a movie. In this study, we perform movie genre classification from plot summaries of movies using bidirectional LSTM (Bi-LSTM). We first divide each plot summary of a movie into sentences and assign the genre of corresponding movie to each sentence. Next, using the word representations of sentences, we train Bi-LSTM networks. We estimate the genres for each sentence separately. Since plot summaries generally contain multiple sentences, we use majority voting for the final decision by considering the posterior probabilities of genres assigned to sentences. Our results reflect that, training Bi-LSTM network after dividing the plot summaries into their sentences and fusing the predictions for individual sentences outperform training the network with the whole plot summaries with the limited amount of data. Moreover, employing Bi-LSTM performs better compared to basic Recurrent Neural Networks (RNNs) and Logistic Regression (LR) as a baseline.", "title": "" }, { "docid": "3f9bcd99eac46264ee0920ddcc866d33", "text": "The advent of easy to use blogging tools is increasing the number of bloggers leading to more diversity in the quality blogspace. The blog search technologies that help users to find “good” blogs are thus more and more important. This paper proposes a new algorithm called “EigenRumor” that scores each blog entry by weighting the hub and authority scores of the bloggers based on eigenvector calculations. This algorithm enables a higher score to be assigned to the blog entries submitted by a good blogger but not yet linked to by any other blogs based on acceptance of the blogger's prior work. General Terms Algorithms, Management, Experimentation", "title": "" }, { "docid": "e897ab9c0f9f850582fbcb172aa8b904", "text": "Facial expression recognition is in general a challenging problem, especially in the presence of weak expression. Most recently, deep neural networks have been emerging as a powerful tool for expression recognition. However, due to the lack of training samples, existing deep network-based methods cannot fully capture the critical and subtle details of weak expression, resulting in unsatisfactory results. In this paper, we propose Deeper Cascaded Peak-piloted Network (DCPN) for weak expression recognition. The technique of DCPN has three main aspects: (1) Peak-piloted feature transformation, which utilizes the peak expression (easy samples) to supervise the non-peak expression (hard samples) of the same type and subject; (2) the back-propagation algorithm is specially designed such that the intermediate-layer feature maps of non-peak expression are close to those of the corresponding peak expression; and (3) an novel integration training method, cascaded fine-tune, is proposed to prevent the network from overfitting. 
Experimental results on two popular facial expression databases, CK$$+$$ + and Oulu-CASIA, show the superiority of the proposed DCPN over state-of-the-art methods.", "title": "" }, { "docid": "21f6ca062098c0dcf04fe8fadfc67285", "text": "The Key study in this paper is to begin the investigation process with the initial forensic analysis in the segments of the storage media which would definitely contain the digital forensic evidences. These Storage media Locations is referred as the Windows registry. Identifying the forensic evidence from windows registry may take less time than required in the case of all locations of a storage media. Our main focus in this research will be to study the registry structure of Windows 7 and identify the useful information within the registry keys of windows 7 that may be extremely useful to carry out any task of digital forensic analysis. The main aim is to describe the importance of the study on computer & digital forensics. The Idea behind the research is to implement a forensic tool which will be very useful in extracting the digital evidences and present them in usable form to a forensic investigator. The work includes identifying various events registry keys value such as machine last shut down time along with machine name, List of all the wireless networks that the computer has connected to; List of the most recently used files or applications, List of all the USB devices that have been attached to the computer and many more. This work aims to point out the importance of windows forensic analysis to extract and identify the hidden information which shall act as an evidence tool to track and gather the user activities pattern. All Research was conducted in a Windows 7 Environment. Keywords—Windows Registry, Windows 7 Forensic Analysis, Windows Registry Structure, Analysing Registry Key, Digital Forensic Identification, Forensic data Collection, Examination of Windows Registry, Decoding of Windows Registry Keys, Discovering User Activities Patterns, Computer Forensic Investigation Tool.", "title": "" }, { "docid": "15b2279d218f0df5d496479644620846", "text": "Despite the proliferation of banking services, lending to industry and the public still constitutes the core of the income of commercial banks and other lending institutions in developed as well as post-transition countries. From the technical perspective, the lending process in general is a relatively straightforward series of actions involving two principal parties. These activities range from the initial loan application to the successful or unsuccessful repayment of the loan. Although retail lending belongs among the most profitable investments in lenders’ asset portfolios (at least in developed countries), increases in the amounts of loans also bring increases in the number of defaulted loans, i.e. loans that either are not repaid at all or cases in which the borrower has problems with paying debts. Thus, the primary problem of any lender is to differentiate between “good” and “bad” debtors prior to granting credit. Such differentiation is possible by using a credit-scoring method. The goal of this paper is to review credit-scoring methods and elaborate on their efficiency based on the examples from the applied research. Emphasis is placed on credit scoring related to retail loans. We survey the methods which are suitable for credit scoring in the retail segment. 
We focus on retail loans as sharp increase in the amounts of loans for this clientele has been recorded in the last few years and another increase can be expected. This dynamic is highly relevant for post-transition countries. In the last few years, banks in the Czech and Slovak Republics have allocated a significant part of their lending to retail clientele. In 2004 alone, Czech and Slovak banks recorded 33.8% and 36.7% increases in retail loans, respectively. Hilbers et al. (2005) review trends in bank lending to the private sector, with a particular focus on Central and Eastern European countries, and find that rapid growth of private sector credit continues to be a key challenge for most of these countries. In the Czech and Slovak Republics the financial liabilities of households formed 11 % and 9 %", "title": "" }, { "docid": "c4ccb674a07ba15417f09b81c1255ba8", "text": "Real world environments are characterized by high levels of linguistic and numerical uncertainties. A Fuzzy Logic System (FLS) is recognized as an adequate methodology to handle the uncertainties and imprecision available in real world environments and applications. Since the invention of fuzzy logic, it has been applied with great success to numerous real world applications such as washing machines, food processors, battery chargers, electrical vehicles, and several other domestic and industrial appliances. The first generation of FLSs were type-1 FLSs in which type-1 fuzzy sets were employed. Later, it was found that using type-2 FLSs can enable the handling of higher levels of uncertainties. Recent works have shown that interval type-2 FLSs can outperform type-1 FLSs in the applications which encompass high uncertainty levels. However, the majority of interval type-2 FLSs handle the linguistic and input numerical uncertainties using singleton interval type-2 FLSs that mix the numerical and linguistic uncertainties to be handled only by the linguistic labels type-2 fuzzy sets. This ignores the fact that if input numerical uncertainties were present, they should affect the incoming inputs to the FLS. Even in the papers that employed non-singleton type-2 FLSs, the input signals were assumed to have a predefined shape (mostly Gaussian or triangular) which might not reflect the real uncertainty distribution which can vary with the associated measurement. In this paper, we will present a new approach which is based on an adaptive non-singleton interval type-2 FLS where the numerical uncertainties will be modeled and handled by non-singleton type-2 fuzzy inputs and the linguistic uncertainties will be handled by interval type-2 fuzzy sets to represent the antecedents’ linguistic labels. The non-singleton type-2 fuzzy inputs are dynamic and they are automatically generated from data and they do not assume a specific shape about the distribution associated with the given sensor. We will present several real world experiments using a real world robot which will show how the proposed type-2 non-singleton type-2 FLS will produce a superior performance to its singleton type-1 and type-2 counterparts when encountering high levels of uncertainties.", "title": "" }, { "docid": "99ec846ba77110a1af12845cafdf115c", "text": "Planning information security investment is somewhere between art and science. This paper reviews and compares existing scientific approaches and discusses the relation between security investment models and security metrics. 
To structure the exposition, the high-level security production function is decomposed into two steps: cost of security is mapped to a security level, which is then mapped to benefits. This allows to structure data sources and metrics, to rethink the notion of security productivity, and to distinguish sources of indeterminacy as measurement error and attacker behavior. It is further argued that recently proposed investment models, which try to capture more features specific to information security, should be used for all strategic security investment decisions beneath defining the overall security budget.", "title": "" }, { "docid": "6acd1583b23a65589992c3297250a603", "text": "Trichostasis spinulosa (TS) is a common but rarely diagnosed disease. For diagnosis, it's sufficient to see a bundle of vellus hair located in a keratinous sheath microscopically. In order to obtain these vellus hair settled in comedone-like openings, Standard skin surface biopsy (SSSB), a non-invasive method was chosen. It's aimed to remind the differential diagnosis of TS in treatment-resistant open comedone-like lesions and discuss the SSSB method in diagnosis. A 25-year-old female patient was admitted with a complaint of the black spots located on bilateral cheeks and nose for 12 years. In SSSB, multiple vellus hair bundles in funnel-shaped structures were observed under the microscope, and a diagnosis of 'TS' was made. After six weeks of treatment with tretinoin 0.025% and 4% erythromycin jel topically, the appearance of black macules was significantly reduced. Treatment had to be terminated due to her pregnancy, and the lesions recurred within 1 month. It's believed that TS should be considered in the differential diagnosis of treatment-resistant open comedone-like lesions, and SSSB might be an inexpensive and effective alternative method for the diagnosis of TS.", "title": "" }, { "docid": "3afea784f4a9eb635d444a503266d7cd", "text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.", "title": "" }, { "docid": "47a484d75b1635139f899d2e1875d8f4", "text": "This work presents the concept and methodology as well as the architecture and physical implementation of an integrated node for smart-city applications. The presented integrated node lies on active RFID technology whereas the use case illustrated, with results from a small-scale verification of the presented node, refers to common-type waste-bins. The sensing units deployed for the use case are ultrasonic sensors that provide ranging information which is translated to fill-level estimations; however the use of a versatile active RFID tag within the node is able to afford multiple sensors for a variety of smart-city applications. 
The most important benefits of the presented node are power minimization, utilization of low-cost components and accurate fill-level estimation with a tiny data-load fingerprint, regarding the specific use case on waste-bins, whereas the node has to be deployed on public means of transportation or similar standard route vehicles within an urban or suburban context.", "title": "" }, { "docid": "352dbf516ba3cde1f1398cb5d75a76c1", "text": "We are building a `virtual-world' of a real world seabed for its visual analysis. Sub-bottom profile is imported in the 3D environment. “section-drilling” three-dimensional model is designed according to the characteristics of the multi-source comprehensive data under the seabed. In this model, the seabed stratigraphic profile obtained by seismic reflection is digitized into discrete points and interpolated with an approved Kriging arithmetic to produce uniform grid in every strata layer. The Delaunay triangular model is then constructed in every layer and calibrated using the drilling data to rectify the depth value of the dataset within the buffer. Finally, the constructed 3D seabed stratigraphic model is rendered in every layer by GPU shader engine. Based on this model, two state-of-the-art applications on website explorer and smartphone prove its ubiquitous feature. The resulting `3D Seabed' is used for simulation, visualization, and analysis, by a set of interlinked, real-time layers of information about the 3D Seabed and its analysis result.", "title": "" }, { "docid": "02321829f5adaec4811e15b9d46dc597", "text": "\"Be careful what you wish for, you just might get it.\" - Proverb In 2005, computing education was experiencing a crisis. Enrollments had \"fallen to such an extent that some academic computing programs were facing significant reductions in staffing levels or even elimination\". The community responded, with panels to investigate and highlight ways to infuse \"passion, beauty, joy and awe\" into the introductory experiences, the CS10K project to bring computing to 10,000 teachers and 100,000 students, and better messaging of career opportunities, to name a few of the initiatives to bring students back into our seats.\n Well, by golly, it worked! It certainly didn't hurt our cause that Wall Street almost collapsed, young whiz kids were becoming TECH billionaires, an inspiring video and an interactive website led millions of people to code for an hour every December, or smart devices put computing into the hands of young people, and social media became the killer app. Whatever it was, CS became hot again. And we mean HOT. There are now several institutions around the world that have well over a thousand students taking CS1 in the Fall of 2015. There's just so much lemonade one can make before the seams start to burst, and the wheels come off the bus, as many shared at SIGCSE 2015 at the Birds of the Feather session.\n The goal of this panel is to bring together educators who were charged with delivering face-to-face CS1 on the grandest scale the field has ever seen. How did they cope? Does it become all people management with an army of Teaching Assistants? What were the differences and common themes in their survival plans? What is working? What mistakes were made? How are they supporting differential learning for the students who don't have the same experience as others? How is diversity being affected? 
Finally, what advice would they have for others interested in venturing into the tsunami, and broaden participation at a massive scale?", "title": "" }, { "docid": "7a84328148fac2738d8954976b09aa45", "text": "The region was covered by 1:250 000 mapping by the Geological Survey of Canada during the mid 1940s (Lord, 1948). A number of showings were found. One of these, the Marmot, was the focus of the first modern exploration (1960s) in the general area. At the same time there was significant exploration activity for porphyry copper and molybdenum mineralization in the intrusive belt running north and south through the McConnell Range. A large gossan was discovered in 1966 at the present site of the Kemess North prospect and led to similar exploration on nearby ground. Falconbridge Nickel Ltd., during a reconnaissance helicopter flight in 1971, discovered a malachite-stained bed in the Sustut drainage that was traceable for over 2500 feet. Their assessment suggested a replacement copper deposi t hosted by volcaniclastic rocks in the upper part of the Takla Group. Numerous junior and major resource companies acquired ground in the area. In 1972 copper was found on the Willow cliffs on the opposite side of the Sustut River and a porphyry style target was identified at the Day. In 1973 the B.C. Geological Survey conducted a mineral deposit study of the Sustut copper area (Church, 1974a). The Geological Survey of Canada returned to pursue general and detailed studies within the McConnell sheet (Richards 1976, and Monger 1977). Monger and Church (1976) revised the stratigraphic nomenclature based on breaks and lithological changes in the volcanic succession supported by fossil data and field observations. In 1983, follow up of a gold-copper-molybdenum soil anomaly led to the discovery of the Kemess South porphyry deposit.", "title": "" }, { "docid": "627d5c8abee0b40c270b3de38ed84e80", "text": "Patients with temporal lobe epilepsy (TLE) often display cognitive deficits. However, current epilepsy therapeutic interventions mainly aim at how to reduce the frequency and degree of epileptic seizures. Recovery of cognitive impairment is not attended enough, resulting in the lack of effective approaches in this respect. In the pilocarpine-induced temporal lobe epilepsy rat model, memory impairment has been classically reported. Here we evaluated spatial cognition changes at different epileptogenesis stages in rats of this model and explored the effects of long-term Mozart music exposure on the recovery of cognitive ability. Our results showed that pilocarpine rats suffered persisting cognitive impairment during epileptogenesis. Interestingly, we found that Mozart music exposure can significantly enhance cognitive ability in epileptic rats, and music intervention may be more effective for improving cognitive function during the early stages after Status epilepticus. These findings strongly suggest that Mozart music may help to promote the recovery of cognitive damage due to seizure activities, which provides a novel intervention strategy to diminish cognitive deficits in TLE patients.", "title": "" }, { "docid": "80f6d8109c56b6573c3c0a9a3bc989f8", "text": "In coded aperture imaging the attainable quality of the reconstructed images strongly depends on the choice of the aperture pattern. Optimum mask patterns can be designed from binary arrays with constant sidelobes of their periodic autocorrelation function, the so{called URAs. 
However, URAs exist for a restricted number of aperture sizes and open fractions only. Using a mismatched filter decoding scheme, artifact-free reconstructions can be obtained even if the aperture array violates the URA condition. A general expression and an upper bound for the signal-to-noise ratio as a function of the aperture array and the relative detector noise level are derived. Combinatorial optimization algorithms, such as the Great Deluge algorithm, are employed for the design of near-optimum aperture arrays. The signal-to-noise ratio of the reconstructions is predicted to be only slightly inferior to the URA case while no restrictions with respect to the aperture size or open fraction are imposed.", "title": "" } ]
scidocsrr
35089915d9f374c0ceda5110b12bab24
History of cannabis as a medicine: a review.
[ { "docid": "3392de7e3182420e882617f0baff389a", "text": "BACKGROUND\nIndividuals who initiate cannabis use at an early age, when the brain is still developing, might be more vulnerable to lasting neuropsychological deficits than individuals who begin use later in life.\n\n\nMETHODS\nWe analyzed neuropsychological test results from 122 long-term heavy cannabis users and 87 comparison subjects with minimal cannabis exposure, all of whom had undergone a 28-day period of abstinence from cannabis, monitored by daily or every-other-day observed urine samples. We compared early-onset cannabis users with late-onset users and with controls, using linear regression controlling for age, sex, ethnicity, and attributes of family of origin.\n\n\nRESULTS\nThe 69 early-onset users (who began smoking before age 17) differed significantly from both the 53 late-onset users (who began smoking at age 17 or later) and from the 87 controls on several measures, most notably verbal IQ (VIQ). Few differences were found between late-onset users and controls on the test battery. However, when we adjusted for VIQ, virtually all differences between early-onset users and controls on test measures ceased to be significant.\n\n\nCONCLUSIONS\nEarly-onset cannabis users exhibit poorer cognitive performance than late-onset users or control subjects, especially in VIQ, but the cause of this difference cannot be determined from our data. The difference may reflect (1). innate differences between groups in cognitive ability, antedating first cannabis use; (2). an actual neurotoxic effect of cannabis on the developing brain; or (3). poorer learning of conventional cognitive skills by young cannabis users who have eschewed academics and diverged from the mainstream culture.", "title": "" } ]
[ { "docid": "ba452a03f619b7de7b37fe76bdb186e8", "text": "Device variability is receiving a lot of interest recently due to its important impact on the design of digital integrated systems. In analog integrated circuits, the variability of identically designed devices has long been a concern since it directly affects the attainable precision. This paper reviews the mismatch device models that are widely used in analog design as well as the fundamental impact of device mismatch on the trade-off between different performance parameters.", "title": "" }, { "docid": "4ae6afb7039936b2e6bcfc030fdb9cea", "text": "Apart from being used as a means of entertainment, computer games have been adopted for a long time as a valuable tool for learning. Computer games can offer many learning benefits to students since they can consume their attention and increase their motivation and engagement which can then lead to stimulate learning. However, most of the research to date on educational computer games, in particular learning versions of existing computer games, focused only on learner with typical development. Rather less is known about designing educational games for learners with special needs. The current research presents the results of a pilot study. The principal aim of this pilot study is to examine the interest of learners with hearing impairments in using an educational game for learning the sign language notation system SignWriting. The results found indicated that, overall, the application is useful, enjoyable and easy to use: the game can stimulate the students’ interest in learning such notations.", "title": "" }, { "docid": "e2ce393fade02f0dfd20b9aca25afd0f", "text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.", "title": "" }, { "docid": "39549cfe16eec5d4b083bf6a05c3d29f", "text": "Recently, there has been increasing interest in learning semantic parsers with indirect supervision, but existing work focuses almost exclusively on question answering. Separately, there have been active pursuits in leveraging databases for distant supervision in information extraction, yet such methods are often limited to binary relations and none can handle nested events. In this paper, we generalize distant supervision to complex knowledge extraction, by proposing the first approach to learn a semantic parser for extracting nested event structures without annotated examples, using only a database of such complex events and unannotated text. The key idea is to model the annotations as latent variables, and incorporate a prior that favors semantic parses containing known events. Experiments on the GENIA event extraction dataset show that our approach can learn from and extract complex biological pathway events. 
Moreover, when supplied with just five example words per event type, it becomes competitive even among supervised systems, outperforming 19 out of 24 teams that participated in the original shared task.", "title": "" }, { "docid": "b5788c52127d2ef06df428d758f1a225", "text": "Conventional convolutional neural networks use either a linear or a nonlinear filter to extract features from an image patch (region) of spatial size $H\times W$ (typically, $H$ is small and is equal to $W$, e.g., $H$ is 5 or 7). Generally, the size of the filter is equal to the size $H\times W$ of the input patch. We argue that the representational ability of equal-size strategy is not strong enough. To overcome the drawback, we propose to use subpatch filter whose spatial size $h\times w$ is smaller than $H\times W$. The proposed subpatch filter consists of two subsequent filters. The first one is a linear filter of spatial size $h\times w$ and is aimed at extracting features from spatial domain. The second one is of spatial size $1\times 1$ and is used for strengthening the connection between different input feature channels and for reducing the number of parameters. The subpatch filter convolves with the input patch and the resulting network is called a subpatch network. Taking the output of one subpatch network as input, we further repeat constructing subpatch networks until the output contains only one neuron in spatial domain. These subpatch networks form a new network called the cascaded subpatch network (CSNet). The feature layer generated by CSNet is called the csconv layer. For the whole input image, we construct a deep neural network by stacking a sequence of csconv layers. Experimental results on five benchmark data sets demonstrate the effectiveness and compactness of the proposed CSNet. For example, our CSNet reaches a test error of 5.68% on the CIFAR10 data set without model averaging. To the best of our knowledge, this is the best result ever obtained on the CIFAR10 data set.", "title": "" }, { "docid": "2bd15d743690c8bcacb0d01650759d62", "text": "With the large amount of available data and the variety of features they offer, electronic health records (EHR) have gotten a lot of interest over recent years, and start to be widely used by the machine learning and bioinformatics communities. While typical numerical fields such as demographics, vitals, lab measurements, diagnoses and procedures, are natural to use in machine learning models, there is no consensus yet on how to use the free-text clinical notes. We show how embeddings can be learned from patients' history of notes, at the word, note and patient level, using simple neural and sequence models.
We show on various relevant evaluation tasks that these embeddings are easily transferable to smaller problems, where they enable accurate predictions using only clinical notes.", "title": "" }, { "docid": "cc9ee1b5111974da999d8c52ba393856", "text": "The back propagation (BP) neural network algorithm is a multi-layer feedforward network trained according to error back propagation algorithm and is one of the most widely applied neural network models. BP network can be used to learn and store a great deal of mapping relations of input-output model, and no need to disclose in advance the mathematical equation that describes these mapping relations. Its learning rule is to adopt the steepest descent method in which the back propagation is used to regulate the weight value and threshold value of the network to achieve the minimum error sum of square. This paper focuses on the analysis of the characteristics and mathematical theory of BP neural network and also points out the shortcomings of BP algorithm as well as several methods for improvement.", "title": "" }, { "docid": "8c2b0e93eae23235335deacade9660f0", "text": "We design and implement a simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.\n Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2-40 soundness error, the communication complexity is roughly 44KB (or less than 34KB under a plausible conjecture), the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016).\n The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For 2-40 soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. Our efficiency advantages become even bigger in an amortized setting, where several instances need to be proven simultaneously.\n Our zero-knowledge protocol is obtained by applying an optimized version of the general transformation of Ishai et al. (STOC 2007) to a variant of the protocol for secure multiparty computation of Damgard and Ishai (Crypto 2006). It can be viewed as a simple zero-knowledge interactive PCP based on \"interleaved\" Reed-Solomon codes.", "title": "" }, { "docid": "f4963c41832024b8cd7d3480204275fa", "text": "Almost surreptitiously, crowdsourcing has entered software engineering practice. In-house development, contracting, and outsourcing still dominate, but many development projects use crowdsourcing-for example, to squash bugs, test software, or gather alternative UI designs. Although the overall impact has been mundane so far, crowdsourcing could lead to fundamental, disruptive changes in how software is developed. Various crowdsourcing models have been applied to software development. 
Such changes offer exciting opportunities, but several challenges must be met for crowdsourcing software development to reach its potential.", "title": "" }, { "docid": "1420ad48fdba30ac37b176007c3945fa", "text": "Accurate and fast foreground object extraction is very important for object tracking and recognition in video surveillance. Although many background subtraction (BGS) methods have been proposed in the recent past, it is still regarded as a tough problem due to the variety of challenging situations that occur in real-world scenarios. In this paper, we explore this problem from a new perspective and propose a novel background subtraction framework with real-time semantic segmentation (RTSS). Our proposed framework consists of two components, a traditional BGS segmenter B and a real-time semantic segmenter S. The BGS segmenter B aims to construct background models and segments foreground objects. The realtime semantic segmenter S is used to refine the foreground segmentation outputs as feedbacks for improving the model updating accuracy. B and S work in parallel on two threads. For each input frame It, the BGS segmenter B computes a preliminary foreground/background (FG/BG) mask Bt. At the same time, the real-time semantic segmenter S extracts the object-level semantics St. Then, some specific rules are applied on Bt and St to generate the final detection Dt. Finally, the refined FG/BG mask Dt is fed back to update the background model. Comprehensive experiments evaluated on the CDnet 2014 dataset demonstrate that our proposed method achieves stateof-the-art performance among all unsupervised background subtraction methods while operating at real-time, and even performs better than some deep learning based supervised algorithms. In addition, our proposed framework is very flexible and has the potential for generalization.", "title": "" }, { "docid": "ae497143f2c1b15623ab35b360d954e5", "text": "With the popularity of social media (e.g., Facebook and Flicker), users could easily share their check-in records and photos during their trips. In view of the huge amount of check-in data and photos in social media, we intend to discover travel experiences to facilitate trip planning. Prior works have been elaborated on mining and ranking existing travel routes from check-in data. We observe that when planning a trip, users may have some keywords about preference on his/her trips. Moreover, a diverse set of travel routes is needed. To provide a diverse set of travel routes, we claim that more features of Places of Interests (POIs) should be extracted. Therefore, in this paper, we propose a Keyword-aware Skyline Travel Route (KSTR) framework that use knowledge extraction from historical mobility records and the user's social interactions. Explicitly, we model the \"Where, When, Who\" issues by featurizing the geographical mobility pattern, temporal influence and social influence. Then we propose a keyword extraction module to classify the POI-related tags automatically into different types, for effective matching with query keywords. We further design a route reconstruction algorithm to construct route candidates that fulfill the query inputs. To provide diverse query results, we explore Skyline concepts to rank routes. 
To evaluate the effectiveness and efficiency of the proposed algorithms, we have conducted extensive experiments on real location-based social network datasets, and the experimental results show that KSTR does indeed demonstrate good performance compared to state-of-the-art works.", "title": "" }, { "docid": "b1f000790b6ff45bd9b0b7ba3aec9cb2", "text": "Broad-scale destruction and fragmentation of native vegetation is a highly visible result of human land-use throughout the world (Chapter 4). From the Atlantic Forests of South America to the tropical forests of Southeast Asia, and in many other regions on Earth, much of the original vegetation now remains only as fragments amidst expanses of land committed to feeding and housing human beings. Destruction and fragmentation of habitats are major factors in the global decline of populations and species (Chapter 10), the modification of native plant and animal communities and the alteration of ecosystem processes (Chapter 3). Dealing with these changes is among the greatest challenges facing the “mission-orientated crisis discipline” of conservation biology (Soulé 1986; see Chapter 1). Habitat fragmentation, by definition, is the “breaking apart” of continuous habitat, such as tropical forest or semi-arid shrubland, into distinct pieces. When this occurs, three interrelated processes take place: a reduction in the total amount of the original vegetation (i.e. habitat loss); subdivision of the remaining vegetation into fragments, remnants or patches (i.e. habitat fragmentation); and introduction of new forms of land-use to replace vegetation that is lost. These three processes are closely intertwined such that it is often difficult to separate the relative effect of each on the species or community of concern. Indeed, many studies have not distinguished between these components, leading to concerns that “habitat fragmentation” is an ambiguous, or even meaningless, concept (Lindenmayer and Fischer 2006). Consequently, we use “landscape change” to refer to these combined processes and “habitat fragmentation” for issues directly associated with the subdivision of vegetation and its ecological consequences. This chapter begins by summarizing the conceptual approaches used to understand conservation in fragmented landscapes. We then examine the biophysical aspects of landscape change, and how such change affects species and communities, posing two main questions: (i) what are the implications for the patterns of occurrence of species and communities?; and (ii) how does landscape change affect processes that influence the distribution and viability of species and communities? The chapter concludes by identifying the kinds of actions that will enhance the conservation of biota in fragmented landscapes.", "title": "" }, { "docid": "52e1c954aefca110d15c24d90de902b2", "text": "Reinforcement learning (RL) agents can benefit from adaptive exploration/exploitation behavior, especially in dynamic environments. We focus on regulating this exploration/exploitation behavior by controlling the action-selection mechanism of RL. Inspired by psychological studies which show that affect influences human decision making, we use artificial affect to influence an agent’s action-selection. Two existing affective strategies are implemented and, in addition, a new hybrid method that combines both. These strategies are tested on ‘maze tasks’ in which a RL agent has to find food (rewarded location) in a maze. 
We use Soar-RL, the new RL-enabled version of Soar, as a model environment. One task tests the ability to quickly adapt to an environmental change, while the other tests the ability to escape a local optimum in order to find the global optimum. We show that artificial affect-controlled action-selection in some cases helps agents to faster adapt to changes in the environment.", "title": "" }, { "docid": "73bbb7122b588761f1bf7b711f21a701", "text": "This research attempts to find a new closed-form solution of toroid and overlapping windings for axial flux permanent magnet machines. The proposed solution includes analytical derivations for winding lengths, resistances, and inductances as functions of fundamental airgap flux density and inner-to-outer diameter ratio. Furthermore, phase back-EMFs, phase terminal voltages, and efficiencies are calculated and compared for both winding types. Finite element analysis is used to validate the accuracy of the proposed analytical calculations. The proposed solution should assist machine designers to ascertain benefits and limitations of toroid and overlapping winding types as well as to get faster results.", "title": "" }, { "docid": "611eacd767f1ea709c1c4aca7acdfcdb", "text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.", "title": "" }, { "docid": "abdd0d2c13c884b22075b2c3f54a0dfc", "text": "Global clock distribution for multi-GHz microprocessors has become increasingly difficult and time-consuming to design. As the frequency of the global clock continues to increase, the timing uncertainty introduced by the clock network − the skew and jitter − must reduce proportional to the clock period. However, the clock skew and jitter for conventional, buffered H-trees are proportional to latency, which has increased for recent generations of microprocessors. A global clock network that uses standing waves and coupled oscillators has the potential to significantly reduce both skew and jitter. Standing waves have the unique property that phase does not depend on position, meaning that there is ideally no skew. They have previously been used for board-level clock distribution, on coaxial cables, and on superconductive wires but have never been implemented on-chip due to the large losses of on-chip interconnects. Networks of coupled oscillators have a phase-averaging effect that reduces both skew and jitter. However, none of the previous implementations of coupled-oscillator clock networks use standing waves and some require considerable circuitry to couple the oscillators. 
In this thesis, a global clock network that incorporates standing waves and coupled oscillators to distribute a high-frequency clock signal with low skew and low jitter is", "title": "" }, { "docid": "a0f46c67118b2efec2bce2ecd96d11d6", "text": "This paper describes the implementation of a service to identify and geo-locate real world events that may be present as social activity signals in two different social networks. Specifically, we focus on content shared by users on Twitter and Instagram in order to design a system capable of fusing data across multiple networks. Past work has demonstrated that it is indeed possible to detect physical events using various social network platforms. However, many of these signals need corroboration in order to handle events that lack proper support within a single network. We leverage this insight to design an unsupervised approach that can correlate event signals across multiple social networks. Our algorithm can detect events and identify the location of the event occurrence. We evaluate our algorithm using both simulations and real world datasets collected using Twitter and Instagram. The results indicate that our algorithm significantly improves false positive elimination and attains high precision compared to baseline methods on real world datasets.", "title": "" }, { "docid": "03ec20a448dc861d8ba8b89b0963d52d", "text": "Social Web 2.0 features have become a vital component in a variety of multimedia systems, e.g., YouTube and Last.fm. Interestingly, adult video websites are also starting to adopt these Web 2.0 principles, giving rise to the term “Porn 2.0”. This paper examines a large Porn 2.0 social network, through data covering 563k users. We explore a number of unusual behavioural aspects that set this apart from more traditional multimedia social networks. We particularly focus on the role of gender and sexuality, to understand how these different groups behave. A number of key differences are discovered relating to social demographics, modalities of interaction and content consumption habits, shedding light on this understudied area of online activity.", "title": "" }, { "docid": "96a38b8b6286169cdd98aa6778456e0c", "text": "Data mining is on the interface of Computer Science andStatistics, utilizing advances in both disciplines to make progressin extracting information from large databases. It is an emergingfield that has attracted much attention in a very short period oftime. This article highlights some statistical themes and lessonsthat are directly relevant to data mining and attempts to identifyopportunities where close cooperation between the statistical andcomputational communities might reasonably provide synergy forfurther progress in data analysis.", "title": "" }, { "docid": "25d25da610b4b3fe54b665d55afc3323", "text": "We address the problem of vision-based navigation in busy inner-city locations, using a stereo rig mounted on a mobile platform. In this scenario semantic information becomes important: rather than modelling moving objects as arbitrary obstacles, they should be categorised and tracked in order to predict their future behaviour. To this end, we combine classical geometric world mapping with object category detection and tracking. Object-category specific detectors serve to find instances of the most important object classes (in our case pedestrians and cars). 
Based on these detections, multi-object tracking recovers the objects’ trajectories, thereby making it possible to predict their future locations, and to employ dynamic path planning. The approach is evaluated on challenging, realistic video sequences recorded at busy inner-city locations.", "title": "" } ]
scidocsrr
016bf355adcc396c31dacc83da145b0e
Personality as a predictor of Business Social Media Usage: an Empirical Investigation of Xing Usage Patterns
[ { "docid": "627b14801c8728adf02b75e8eb62896f", "text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.", "title": "" }, { "docid": "ee6d70f4287f1b43e1c36eba5f189523", "text": "Received: 10 March 2008 Revised: 31 May 2008 2nd Revision: 27 July 2008 Accepted: 11 August 2008 Abstract For more than a century, concern for privacy (CFP) has co-evolved with advances in information technology. The CFP refers to the anxious sense of interest that a person has because of various types of threats to the person’s state of being free from intrusion. Research studies have validated this concept and identified its consequences. For example, research has shown that the CFP can have a negative influence on the adoption of information technology; but little is known about factors likely to influence such concern. This paper attempts to fill that gap. Because privacy is said to be a part of a more general ‘right to one’s personality’, we consider the so-called ‘Big Five’ personality traits (agreeableness, extraversion, emotional stability, openness to experience, and conscientiousness) as factors that can influence privacy concerns. Protection motivation theory helps us to explain this influence in the context of an emerging pervasive technology: location-based services. Using a survey-based approach, we find that agreeableness, conscientiousness, and openness to experience each affect the CFP. These results have implications for the adoption, the design, and the marketing of highly personalized new technologies. European Journal of Information Systems (2008) 17, 387–402. doi:10.1057/ejis.2008.29", "title": "" }, { "docid": "5a5fbde8e0e264410fe23322a9070a39", "text": "By asking users of career-oriented social networking sites I investigated their job search behavior. For further IS-theorizing I integrated the number of a user's contacts as an own construct into Venkatesh's et al. UTAUT2 model, which substantially rose its predictive quality from 19.0 percent to 80.5 percent concerning the variance of job search success. Besides other interesting results I found a substantial negative relationship between the number of contacts and job search success, which supports the experience of practitioners but contradicts scholarly findings. The results are useful for scholars and practitioners.", "title": "" } ]
[ { "docid": "a4f2a82daf86314363ceeac34cba7ed9", "text": "As a vital task in natural language processing, relation classification aims to identify relation types between entities from texts. In this paper, we propose a novel Att-RCNN model to extract text features and classify relations by combining recurrent neural network (RNN) and convolutional neural network (CNN). This network structure utilizes RNN to extract higher level contextual representations of words and CNN to obtain sentence features for the relation classification task. In addition to this network structure, both word-level and sentence-level attention mechanisms are employed in Att-RCNN to strengthen critical words and features to promote the model performance. Moreover, we conduct experiments on four distinct datasets: SemEval-2010 task 8, SemEval-2018 task 7 (two subtask datasets), and KBP37 dataset. Compared with the previous public models, Att-RCNN has the overall best performance and achieves the highest $F_{1}$ score, especially on the KBP37 dataset.", "title": "" }, { "docid": "fcf2fd920ac463e505e68aa02baef795", "text": "Channel modeling is a critical topic when considering designing, learning, or evaluating the performance of any communications system. Most prior work in designing or learning new modulation schemes has focused on using highly simplified analytic channel models such as additive white Gaussian noise (AWGN), Rayleigh fading channels or similar. Recently, we proposed the usage of a generative adversarial networks (GANs) to jointly approximate a wireless channel response model (e.g. from real black box measurements) and optimize for an efficient modulation scheme over it using machine learning. This approach worked to some degree, but was unable to produce accurate probability distribution functions (PDFs) representing the stochastic channel response. In this paper, we focus specifically on the problem of accurately learning a channel PDF using a variational GAN, introducing an architecture and loss function which can accurately capture stochastic behavior. We illustrate where our prior method failed and share results capturing the performance of such as system over a range of realistic channel distributions.", "title": "" }, { "docid": "681b46b159c7b5df2b1bf99e9f0064fd", "text": "Purpose – The purpose of this paper is to examine the factors within the technology-organization-environment (TOE) framework that affect the decision to adopt electronic commerce (EC) and extent of EC adoption, as well as adoption and non-adoption of different EC applications within smalland medium-sized enterprises (SMEs). Design/methodology/approach – A questionnaire-based survey was conducted to collect data from 235 managers or owners of manufacturing SMEs in Iran. The data were analyzed by employing factorial analysis and relevant hypotheses were derived and tested by multiple and logistic regression analysis. Findings – EC adoption within SMEs is affected by perceived relative advantage, perceived compatibility, CEO’s innovativeness, information intensity, buyer/supplier pressure, support from technology vendors, and competition. Similarly, description on determinants of adoption and non-adoption of different EC applications has been provided. Research limitations/implications – Cross-sectional data of this research tend to have certain limitations when it comes to explaining the direction of causality of the relationships among the variables, which will change overtime. 
Practical implications – The findings offer valuable insights to managers, IS experts, and policy makers responsible for assisting SMEs with entering into the e-marketplace. Vendors should collaborate with SMEs to enhance the compatibility of EC applications with these businesses. To enhance the receptiveness of EC applications, CEOs, innovativeness and perception toward EC advantages should also be aggrandized. Originality/value – This study is perhaps one of the first to use a wide range of variables in the light of TOE framework to comprehensively assess EC adoption behavior, both in terms of initial and post-adoption within SMEs in developing countries, as well adoption and non-adoption of simple and advanced EC applications such as electronic supply chain management systems.", "title": "" }, { "docid": "fdd790d33300c19cb0c340903e503b02", "text": "We present a simple method for evergrowing extraction of predicate paraphrases from news headlines in Twitter. Analysis of the output of ten weeks of collection shows that the accuracy of paraphrases with different support levels is estimated between 60-86%. We also demonstrate that our resource is to a large extent complementary to existing resources, providing many novel paraphrases. Our resource is publicly available, continuously expanding based on daily news.", "title": "" }, { "docid": "8a35d871317a372445a5f25eb7610e77", "text": "Wireless Sensor Networks (WSNs) have their own unique nature of distributed resources and dynamic topology. This introduces very special requirements that should be met by the proposed routing protocols for the WSNs. A Wireless Sensor Network routing protocol is a standard which controls the number of nodes that come to an agreement about the way to route packets between all the computing devices in mobile wireless networks. Today, wireless networks are becoming popular and many routing protocols have been proposed in the literature. Considering these protocols we made a survey on the WSNs energy-efficient routing techniques which are used for Health Care Communication Systems concerning especially the Flat Networks Protocols that have been developed in recent years. Then, as related work, we discuss each of the routing protocols belonging to this category and conclude with a comparison of them.", "title": "" }, { "docid": "6e993c4f537dfb8c73980dd56aead6d8", "text": "A novel compact 4 × 4 Butler matrix using only microstrip couplers and a crossover is proposed in this letter. Compared with the conventional Butler matrix, the proposed one avoids the interconnecting mismatch loss and imbalanced amplitude introduced by the phase shifter. The measurements show accurate phase differences of 45±0.8° and -135±0.9° with an amplitude imbalance less than 0.4 dB. The 10 dB return loss bandwidth is 20.1%.", "title": "" }, { "docid": "ffef3f247f0821eee02b8d8795ddb21c", "text": "A broadband polarization reconfigurable rectenna is proposed, which can operate in three polarization modes. The receiving antenna of the rectenna is a polarization reconfigurable planar monopole antenna. By installing switches on the feeding network, the antenna can switch to receive electromagnetic (EM) waves with different polarizations, including linear polarization (LP), right-hand and left-hand circular polarizations (RHCP/LHCP). To achieve stable conversion efficiency of the rectenna (nr) in all the modes within a wide frequency band, a tunable matching network is inserted between the rectifying circuit and the antenna. 
The measured nr changes from 23.8% to 31.9% in the LP mode within 5.1-5.8 GHz and from 22.7% to 24.5% in the CP modes over 5.8-6 GHz. Compared to rectennas with conventional broadband matching network, the proposed rectenna exhibits more stable conversion efficiency.", "title": "" }, { "docid": "f26d34a762ce2c8ffd1c92ec0a86d56a", "text": "Despite recent interest in digital fabrication, there are still few algorithms that provide control over how light propagates inside a solid object. Existing methods either work only on the surface or restrict themselves to light diffusion in volumes. We use multi-material 3D printing to fabricate objects with embedded optical fibers, exploiting total internal reflection to guide light inside an object. We introduce automatic fiber design algorithms together with new manufacturing techniques to route light between two arbitrary surfaces. Our implicit algorithm optimizes light transmission by minimizing fiber curvature and maximizing fiber separation while respecting constraints such as fiber arrival angle. We also discuss the influence of different printable materials and fiber geometry on light propagation in the volume and the light angular distribution when exiting the fiber. Our methods enable new applications such as surface displays of arbitrary shape, touch-based painting of surfaces, and sensing a hemispherical light distribution in a single shot.", "title": "" }, { "docid": "441a6a879e0723c00f48796fd4bb1a91", "text": "Recent research on Low Power Wide Area Network (LPWAN) technologies which provide the capability of serving massive low power devices simultaneously has been very attractive. The LoRaWAN standard is one of the most successful developments. Commercial pilots are seen in many countries around the world. However, the feasibility of large scale deployments, for example, for smart city applications need to be further investigated. This paper provides a comprehensive case study of LoRaWAN to show the feasibility, scalability, and reliability of LoRaWAN in realistic simulated scenarios, from both technical and economic perspectives. We develop a Matlab based LoRaWAN simulator to offer a software approach of performance evaluation. A practical LoRaWAN network covering Greater London area is implemented. Its performance is evaluated based on two typical city monitoring applications. We further present an economic analysis and develop business models for such networks, in order to provide a guideline for commercial network operators, IoT vendors, and city planners to investigate future deployments of LoRaWAN for smart city applications.", "title": "" }, { "docid": "bee35be37795d274dfbb185036fb8ae9", "text": "This paper presents a human--machine interface to control exoskeletons that utilizes electrical signals from the muscles of the operator as the main means of information transportation. These signals are recorded with electrodes attached to the skin on top of selected muscles and reflect the activation of the observed muscle. They are evaluated by a sophisticated but simplified biomechanical model of the human body to derive the desired action of the operator. A support action is computed in accordance to the desired action and is executed by the exoskeleton. The biomechanical model fuses results from different biomechanical and biomedical research groups and performs a sensible simplification considering the intended application. 
Some of the model parameters reflect properties of the individual human operator and his or her current body state. A calibration algorithm for these parameters is presented that relies exclusively on sensors mounted on the exoskeleton. An exoskeleton for knee joint support was designed and constructed to verify the model and to investigate the interaction between operator and machine in experiments with force support during everyday movements.", "title": "" }, { "docid": "631b6c1bce729a25c02f499464df7a4f", "text": "Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall). Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences.", "title": "" }, { "docid": "5dad2c804c4718b87ae6ee9d7cc5a054", "text": "The masquerade attack, where an attacker takes on the identity of a legitimate user to maliciously utilize that user’s privileges, poses a serious threat to the security of information systems. Such attacks completely undermine traditional security mechanisms due to the trust imparted to user accounts once they have been authenticated. Many attempts have been made at detecting these attacks, yet achieving high levels of accuracy remains an open challenge. In this paper, we discuss the use of a specially tuned sequence alignment algorithm, typically used in bioinformatics, to detect instances of masquerading in sequences of computer audit data. By using the alignment algorithm to align sequences of monitored audit data with sequences known to have been produced by the user, the alignment algorithm can discover areas of similarity and derive a metric that indicates the presence or absence of masquerade attacks. Additionally, we present several scoring systems, methods for accommodating variations in user behavior, and heuristics for decreasing the computational requirements of the algorithm. 
Our technique is evaluated against the standard masquerade detection dataset provided by Schonlau et al. [14, 13], and the results show that the use of the sequence alignment technique provides, to our knowledge, the best results of all masquerade detection techniques to date.", "title": "" }, { "docid": "3157970218dc3761576345c0e01e3121", "text": "This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu", "title": "" }, { "docid": "e79abaaa50d8ab8938f1839c7e4067f9", "text": "We review the objectives and techniques used in the control of horizontal axis wind turbines at the individual turbine level, where controls are applied to the turbine blade pitch and generator. The turbine system is modeled as a flexible structure operating in the presence of turbulent wind disturbances. Some overview of the various stages of turbine operation and control strategies used to maximize energy capture in below rated wind speeds is given, but emphasis is on control to alleviate loads when the turbine is operating at maximum power. After reviewing basic turbine control objectives, we provide an overview of the common basic linear control approaches and then describe more advanced control architectures and why they may provide significant advantages.", "title": "" }, { "docid": "c99f6ba5851e497206d444d0780a3ef0", "text": "Digital backchannel systems have been proven useful to help a lecturer gather real-time online feedback from students in a lecture environment. However, the large number of posts made during a lecture creates a major hurdle for the lecturer to promptly analyse them and take actions accordingly in time. To tackle this problem, we propose a solution that analyses the sentiment of students' feedback and visualises the morale trend of the student population to the lecturer in real time. In this paper, we present the user interface for morale visualisation and playback of ranked posts as well as the techniques for sentiment analysis and morale computation.", "title": "" }, { "docid": "a10b0a69ba7d3f902590b35cf0d5ea32", "text": "This article distills insights from historical, sociological, and psychological perspectives on marriage to develop the suffocation model of marriage in America. 
According to this model, contemporary Americans are asking their marriage to help them fulfill different sets of goals than in the past. Whereas they ask their marriage to help them fulfill their physiological and safety needs much less than in the past, they ask it to help them fulfill their esteem and self-actualization needs much more than in the past. Asking the marriage to help them fulfill the latter, higher level needs typically requires sufficient investment of time and psychological resources to ensure that the two spouses develop a deep bond and profound insight into each other’s essential qualities. Although some spouses are investing sufficient resources—and reaping the marital and psychological benefits of doing so—most are not. Indeed, they are, on average, investing less than in the past. As a result, mean levels of marital quality and personal well-being are declining over time. According to the suffocation model, spouses who are struggling with an imbalance between what they are asking from their marriage and what they are investing in it have several promising options for corrective action: intervening to optimize their available resources, increasing their investment of resources in the marriage, and asking less of the marriage in terms of facilitating the fulfillment of spouses’ higher needs. Discussion explores the implications of the suffocation model for understanding dating and courtship, sociodemographic variation, and marriage beyond American’s borders.", "title": "" }, { "docid": "72bbd468c00ae45979cce3b771e4c2bf", "text": "Twitter is a popular microblogging and social networking service with over 100 million users. Users create short messages pertaining to a wide variety of topics. Certain topics are highlighted by Twitter as the most popular and are known as “trending topics.” In this paper, we will outline methodologies of detecting and identifying trending topics from streaming data. Data from Twitter’s streaming API will be collected and put into documents of equal duration. Data collection procedures will allow for analysis over multiple timespans, including those not currently associated with Twitter-identified trending topics. Term frequency-inverse document frequency analysis and relative normalized term frequency analysis are performed on the documents to identify the trending topics. Relative normalized term frequency analysis identifies unigrams, bigrams, and trigrams as trending topics, while term frequcny-inverse document frequency analysis identifies unigrams as trending topics.", "title": "" }, { "docid": "753c52924fadee65697f09d00b4bb187", "text": "Although labelled graphical, many modelling languages represent important model parts as structured text. We benefit from sophisticated text editors when we use programming languages, but we neglect the same technology when we edit the textual parts of graphical models. Recent advances in generative engineering of textual model editors make the development of such sophisticated text editors practical, even for the smallest textual constructs of graphical languages. In this paper, we present techniques to embed textual model editors into graphical model editors and prove our approach for EMF-based textual editors and graphical editors created with GMF.", "title": "" }, { "docid": "86e0c7b70de40fcd5179bf3ab67bc3a4", "text": "The development of a scale to assess drug and other treatment effects on severely mentally retarded individuals was described. 
In the first stage of the project, an initial scale encompassing a large number of behavior problems was used to rate 418 residents. The scale was then reduced to an intermediate version, and in the second stage, 509 moderately to profoundly retarded individuals were rated. Separate factor analyses of the data from the two samples resulted in a five-factor scale comprising 58 items. The factors of the Aberrant Behavior Checklist have been labeled as follows: (I) Irritability, Agitation, Crying; (II) Lethargy, Social Withdrawal; (III) Stereotypic Behavior; (IV) Hyperactivity, Noncompliance; and (V) Inappropriate Speech. Average subscale scores were presented for the instrument, and the results were compared with empirically derived rating scales of childhood psychopathology and with factor analytic work in the field of mental retardation.", "title": "" } ]
scidocsrr
5670049dcde17f5d1da6ae1f846d2c42
A Meta-Analysis of the Effects of Calculators on Students’ Achievement and Attitude Levels in Precollege Mathematics Classes
[ { "docid": "d61fde7b9c5a2c17c7cafb44aa51fe9b", "text": "868 NOTICES OF THE AMS VOLUME 47, NUMBER 8 In April 2000 the National Council of Teachers of Mathematics (NCTM) released Principles and Standards for School Mathematics—the culmination of a multifaceted, three-year effort to update NCTM’s earlier standards documents and to set forth goals and recommendations for mathematics education in the prekindergarten-through-grade-twelve years. As the chair of the Writing Group, I had the privilege to interact with all aspects of the development and review of this document and with the committed groups of people, including the members of the Writing Group, who contributed immeasurably to this process. This article provides some background about NCTM and the standards, the process of development, efforts to gather input and feedback, and ways in which feedback from the mathematics community influenced the document. The article concludes with a section that provides some suggestions for mathematicians who are interested in using Principles and Standards.", "title": "" }, { "docid": "d34be0ce0f9894d6e219d12630166308", "text": "The need for curricular reform in K-4 mathematics is clear. Such reform must address both the content and emphasis of the curriculum as well as approaches to instruction. A longstanding preoccupation with computation and other traditional skills has dominated both what mathematics is taught and the way mathematics is taught at this level. As a result, the present K-4 curriculum is narrow in scope; fails to foster mathematical insight, reasoning, and problem solving; and emphasizes rote activities. Even more significant is that children begin to lose their belief that learning mathematics is a sense-making experience. They become passive receivers of rules and procedures rather than active participants in creating knowledge.", "title": "" } ]
[ { "docid": "0db1caadc1f568ceaeafa6f063bf013b", "text": "The modern musician enjoys access to a staggering number of audio samples. Composition software can ship with many gigabytes of data, and there are many more to be found online. However, conventional methods for navigating these libraries are still quite rudimentary, and often involve scrolling through alphabetical lists. We present AudioQuilt, a system for sample exploration that allows audio clips to be sorted according to user taste, and arranged in any desired 2D formation such that similar samples are located near each other. Our method relies on two advances in machine learning. First, metric learning allows the user to shape the audio feature space to match their own preferences. Second, kernelized sorting finds an optimal arrangement for the samples in 2D. We demonstrate our system with three new interfaces for exploring audio samples, and evaluate the technology qualitatively and quantitatively via a pair of user studies.", "title": "" }, { "docid": "a2e161724489b6210bf29c0c4f721534", "text": "OBJECTIVE\nTo review the results and complications of the surgical treatment of craniosynostosis in 283 consecutive patients treated between 1999 and 2007.\n\n\nPATIENTS AND METHODS\nOur series consisted of 330 procedures performed in 283 patients diagnosed with scaphocephaly (n=155), trigonocephaly (n=50), anterior plagiocephaly (n=28), occipital plagiocephaly (n=1), non-syndromic multi-suture synostosis (n=20), and with diverse craniofacial syndromes (n=32; 11 Crouzon, 11 Apert, 7 Pfeiffer, 2 Saethre-Chotzen, and 2 clover-leaf skull). We used the classification of Whitaker et al. to evaluate the surgical results. Complications of each technique and time of patients' hospitalization were also recorded. The surgeries were classified in 12 different types according to the techniques used. Type I comprised endoscopic assisted osteotomies for sagittal synostosis (42 cases). Type II included sagittal suturectomy and expanding osteotomies (46 cases). Type III encompassed procedures similar to type II but that included frontal dismantling or frontal osteotomies in scaphocephaly (59 cases). Type IV referred to complete cranial vault remodelling (holocranial dismantling) in scaphocephaly (13 cases). Type V belonged to fronto-orbital remodelling without fronto-orbital bandeau in trigonocephaly (50 cases). Type VI included fronto-orbital remodelling without fronto-orbital bandeau in plagiocephaly (14 cases). In Type VII cases of plagiocephaly with frontoorbital remodelling and fronto-orbital bandeau were comprised (14 cases). Type VIII consisted of occipital advancement in posterior plagiocephaly (1 case). Type IX included standard bilateral fronto-orbital advancement with expanding osteotomies (30 cases). Type X was used in multi-suture craniosynostosis (15 cases) and consisted of holocranial dismantling (complete cranial vault remodelling). Type XI included occipital and suboccipital craniectomies in multiple suture craniosynostosis (10 cases) and Type XII instances of fronto-orbital distraction (26 cases).\n\n\nRESULTS\nThe mortality rate of the series was 2 out of 283 cases (0.7%). These 2 patients died one year after surgery. All complications were resolved without permanent deficit. Mean age at surgery was 6.75 months. According to Whitaker et al's classification, 191 patients were classified into Category I (67.49%), 51 into Category II (18.02%), 30 into Category III (10.6%) and 14 into Category IV (4.90%). 
Regarding to craniofacial conformation, 85.5 % of patients were considered as a good result and 15.5% of patients as a poor result. Of the patients with poor results, 6.36% were craniofacial syndromes, 2.12% had anterior plagiocephaly and 1.76% belonged to non-syndromic craniosynostosis. The most frequent complication was postoperative hyperthermia of undetermined origin (13.43% of the cases), followed by infection (7.5%), subcutaneous haematoma (5.3%), dural tears (5%), and CSF leakage (2.5%). The number of complications was higher in the group of re-operated patients (12.8% of all). In this subset of reoperations, infection accounted for 62.5%, dural tears for 93% and CSF leaks for 75% of the total. In regard to the surgical procedures, endoscopic assisted osteotomies presented the lowest rate of complications, followed by standard fronto-orbital advancement in multiple synostosis, trigonocephaly and plagiocephaly. The highest number of complications occurred in complete cranial vault remodelling (holocranial dismantling) in scaphocephaly and multiple synostoses and after the use of internal osteogenic distractors. Of note, are two cases of iatrogenic basal encephalocele that occurred after combined fronto-facial distraction.\n\n\nCONCLUSIONS\nThe best results were obtained in patients with isolated craniosynostosis and the worst in cases with syndromic and multi-suture craniosynostosis. The rate and severity of complications were related to the type of surgical procedure and was higher among patients undergoing re-operations. The mean time of hospitalization was also modified by these factors. Finally, we report our considerations for the management of craniosynostosis taking into account each specific technique and the age at surgery, complication rates and the results of the whole series.", "title": "" }, { "docid": "1f7fa34fd7e0f4fd7ff9e8bba2a78e3c", "text": "Today many multi-national companies or organizations are adopting the use of automation. Automation means replacing the human by intelligent robots or machines which are capable to work as human (may be better than human). Artificial intelligence is a way of making machines, robots or software to think like human. As the concept of artificial intelligence is use in robotics, it is necessary to understand the basic functions which are required for robots to think and work like human. These functions are planning, acting, monitoring, perceiving and goal reasoning. These functions help robots to develop its skills and implement it. Since robotics is a rapidly growing field from last decade, it is important to learn and improve the basic functionality of robots and make it more useful and user-friendly.", "title": "" }, { "docid": "1997b8a0cac1b3beecfd79b3e206d7e4", "text": "Scatterplots are well established means of visualizing discrete data values with two data variables as a collection of discrete points. We aim at generalizing the concept of scatterplots to the visualization of spatially continuous input data by a continuous and dense plot. An example of a continuous input field is data defined on an n-D spatial grid with respective interpolation or reconstruction of in-between values. We propose a rigorous, accurate, and generic mathematical model of continuous scatterplots that considers an arbitrary density defined on an input field on an n-D domain and that maps this density to m-D scatterplots. 
Special cases are derived from this generic model and discussed in detail: scatterplots where the n-D spatial domain and the m-D data attribute domain have identical dimension, 1-D scatterplots as a way to define continuous histograms, and 2-D scatterplots of data on 3-D spatial grids. We show how continuous histograms are related to traditional discrete histograms and to the histograms of isosurface statistics. Based on the mathematical model of continuous scatterplots, respective visualization algorithms are derived, in particular for 2-D scatterplots of data from 3-D tetrahedral grids. For several visualization tasks, we show the applicability of continuous scatterplots. Since continuous scatterplots do not only sample data at grid points but interpolate data values within cells, a dense and complete visualization of the data set is achieved that scales well with increasing data set size. Especially for irregular grids with varying cell size, improved results are obtained when compared to conventional scatterplots. Therefore, continuous scatterplots are a suitable extension of a statistics visualization technique to be applied to typical data from scientific computation.", "title": "" }, { "docid": "4c48a0be3e0194e57d9e08c1befeb7f7", "text": "During preclinical investigations into the safety of drugs and chemicals, many are found to interfere with reproductive function in the female rat. This interference is commonly expressed as a change in normal morphology of the reproductive tract or a disturbance in the duration of particular phases of the estrous cycle. Such alterations can be recognized only if the pathologist has knowledge of the continuously changing histological appearance of the various components of the reproductive tract during the cycle and can accurately and consistently ascribe an individual tract to a particular phase of the cycle. Unfortunately, although comprehensive reports illustrating the normal appearance of the tract during the rat estrous cycle have been available over many years, they are generally somewhat ambiguous about distinct criteria for defining the end of one stage and the beginning of another. This detail is absolutely essential to achieve a consistent approach to staging the cycle. For the toxicologic pathologist, this report illustrates a pragmatic and practical approach to staging the estrous cycle in the rat based on personal experience and a review of the literature from the last century.", "title": "" }, { "docid": "517d9e98352aa626cecae9e17cbbbc97", "text": "The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network. In natural language processing, sequence-to-sequence (Seq2Seq) models typically serve as encoderdecoder networks. When combined with a traditional (deterministic) attention mechanism, the variational latent space may be bypassed by the attention model, and thus becomes ineffective. In this paper, we propose a variational attention mechanism for VED, where the attention vector is also modeled as Gaussian distributed random variables. 
Results on two experiments show that, without loss of quality, our proposed method alleviates the bypassing phenomenon as it increases the diversity of generated sentences.1", "title": "" }, { "docid": "867041312ec43a2b13937e9b82d68dc5", "text": "This paper presents a method of implementing impedance control (with inertia, damping, and stiffness terms) on a dual-arm system by using the relative Jacobian technique. The proposed method significantly simplifies the control implementation because the dual arm is treated as a single manipulator, whose end-effector motion is defined by the relative motion between the two end effectors. As a result, task description becomes simpler and more intuitive when specifying the desired impedance and the desired trajectories. This is the basis for the relative impedance control. In addition, the use of time-delay estimation enhances ease of implementation of our proposed method to a physical system, which would have been otherwise a very tedious and complicated process.", "title": "" }, { "docid": "279de90035c16de3f3acfcd4f352a3c9", "text": "Purpose – To develop a model that bridges the gap between CSR definitions and strategy and offers guidance to managers on how to connect socially committed organisations with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. Design/methodology/approach – This paper offers a critical evaluation of the theoretical foundations of corporate responsibility (CR) and proposes a new strategic approach to CR, which seeks to overcome the limitations of normative definitions. To address this perceived issue, the authors propose a new processual model of CR, which they refer to as the 3C-SR model. Findings – The 3C-SR model can offer practical guidelines to managers on how to connect with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. It is argued that many of the redefinitions of CR for a contemporary audience are normative exhortations (“calls to arms”) that fail to provide managers with the conceptual resources to move from “ought” to “how”. Originality/value – The 3C-SR model offers a novel approach to CR in so far as it addresses strategy, operations and markets in a single framework.", "title": "" }, { "docid": "2af36afd2440a4940873fef1703aab3f", "text": "In recent years researchers have found that alternations in arterial or venular tree of the retinal vasculature are associated with several public health problems such as diabetic retinopathy which is also the leading cause of blindness in the world. A prerequisite for automated assessment of subtle changes in arteries and veins, is to accurately separate those vessels from each other. This is a difficult task due to high similarity between arteries and veins in addition to variation of color and non-uniform illumination inter and intra retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. Indeed, vessels are divided to smaller segments and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. 
Finally, a post processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In the last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular sub trees. Ultimately vessel labels are revised by publishing the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images including DRIVE database demonstrates the good performance and robustness of the method. The proposed method may be used for determination of arteriolar to venular diameter ratio in retinal images. Also the proposed method potentially allows for further investigation of labels of thinner arteries and veins which might be found by tracing them back to the major vessels.", "title": "" }, { "docid": "6af2c0794e5acc5633bd445ff28aea1a", "text": "Twitter is a microblogging platform that allows users to post public short messages. Posts shared by users pertaining to real-world events or themes can provide a rich “on-theground” live update of the events for the benefit of everyone. Unfortunately, the posted information may not be all credible and rumours can spread over this platform. Existing credibility assessment work have focused on identifying features for discriminating the credibility of messages at the tweet level. However, they do not handle tweets that contain multiple pieces of information, each of which may have different level of credibility. In this work, we introduce the notion of a claim based on subject and predicate terms, and propose a framework to identify claims from a corpus of tweets related to some major event or theme. Specifically, we draw upon work done in open information extraction to extract from tweets, tuples that comprises of subjects and their predicate. Then we cluster these tuples to identify claims such that each claim refers to only one aspect of the event. Tweets corresponding to the tuples in each cluster serve as evidence supporting subsequent credibility assessment task. Extensive experiments on two real world datasets shows the effectiveness of the proposed approach in identifying claims.", "title": "" }, { "docid": "e0d511ff3770cfe83656c4822be6e9f8", "text": "Most of industrial induction motors currently used employ simple winding patterns, which commonly are designed to fulfil the fundamental magnetizing flux and torque requirements, disregarding the spatial harmonic content of the air-gap magnetomotive force (MMF). However, it is well known that the lower-order MMF spatial harmonics have a negative impact on the motor efficiency, vibration, noise, and torque production. The use of different turns per coil in the winding design is a possible solution to mitigate the problem. In this paper, a novel winding optimizing algorithm is fully described. The air-gap is modelled as a linear function of the current-sheet created by the conductors in the slots. Several winding patterns with different poles for stators with different slots are optimized, and the turns per coil pattern is presented in tables for single and double layer windings with optimal coil pitch shortening. These tables can be used, as reference, in winding design projects. An application example of winding optimization is also presented.", "title": "" }, { "docid": "6cfc078d0b908cb020417d4503e5bade", "text": "How does an entrepreneur’s social network impact crowdfunding? 
Based on social capital theory, we developed a research model and conducted a comparative study using objective data collected from China and the U.S. We found that an entrepreneur’s social network ties, obligations to fund other entrepreneurs, and the shared meaning of the crowdfunding project between the entrepreneur and the sponsors had significant effects on crowdfunding performance in both China and the U.S. The predictive power of the three dimensions of social capital was stronger in China than it was in the U.S. Obligation also had a greater impact in China. 2014 Elsevier B.V. All rights reserved. § This study is supported by the Natural Science Foundation of China (71302186), the Chinese Ministry of Education Humanities and Social Sciences Young Scholar Fund (12YJCZH306), the China National Social Sciences Fund (11AZD077), and the Fundamental Research Funds for the Central Universities (JBK120505). * Corresponding author. Tel.: +1 218 726 7334. E-mail addresses: haichao_zheng@163.com (H. Zheng), dli@d.umn.edu (D. Li), kaitlynwu@swufe.edu.cn (J. Wu), xuyun@swufe.edu.cn (Y. Xu).", "title": "" }, { "docid": "e41079edd8ad3d39b22397d669f7af61", "text": "Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of their initial Kanji characters in Experiments 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit involved when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words as well as the implications for theories dealing with language production processes. (PsycINFO Database Record", "title": "" }, { "docid": "36b232e486ee4c9885a51a1aefc8f12b", "text": "Graphics processing units (GPUs) are a powerful platform for building high-speed network traffic processing applications using low-cost hardware. Existing systems tap the massively parallel architecture of GPUs to speed up certain computationally intensive tasks, such as cryptographic operations and pattern matching. However, they still suffer from significant overheads due to criticalpath operations that are still being carried out on the CPU, and redundant inter-device data transfers. In this paper we present GASPP, a programmable network traffic processing framework tailored to modern graphics processors. GASPP integrates optimized GPUbased implementations of a broad range of operations commonly used in network traffic processing applications, including the first purely GPU-based implementation of network flow tracking and TCP stream reassembly. 
GASPP also employs novel mechanisms for tackling control flow irregularities across SIMT threads, and sharing memory context between the network interface and the GPU. Our evaluation shows that GASPP can achieve multi-gigabit traffic forwarding rates even for computationally intensive and complex network operations such as stateful traffic classification, intrusion detection, and packet encryption. Especially when consolidating multiple network applications on the same device, GASPP achieves up to 16.2× speedup compared to standalone GPU-based implementations of the same applications.", "title": "" }, { "docid": "45f8ee067c8e70b64ba879cf9415e107", "text": "Visualizing the intellectual structure of scientific domains using co-cited units such as references or authors has become a routine for domain analysis. In previous studies, paper-reference matrices are usually transformed into reference-reference matrices to obtain co-citation relationships, which are then visualized in different representations, typically as node-link networks, to represent the intellectual structures of scientific domains. Such network visualizations sometimes contain tightly knit components, which make visual analysis of the intellectual structure a challenging task. In this study, we propose a new approach to reveal co-citation relationships. Instead of using a reference-reference matrix, we directly use the original paper-reference matrix as the information source, and transform the paper-reference matrix into an FP-tree and visualize it in a Java-based prototype system. We demonstrate the usefulness of our approach through visual analyses of the intellectual structure of two domains: information visualization and Sloan Digital Sky Survey (SDSS). The results show that our visualization not only retains the major information of co-citation relationships, but also reveals more detailed sub-structures of tightly knit clusters than a conventional node-link network visualization.", "title": "" }, { "docid": "41df403d437a17cb65915b755060ef8a", "text": "User verification systems that use a single biometric indicator often have to contend with noisy sensor data, restricted degrees of freedom, non-universality of the biometric trait and unacceptable error rates. Attempting to improve the performance of individual matchers in such situations may not prove to be effective because of these inherent problems. Multibiometric systems seek to alleviate some of these drawbacks by providing multiple evidences of the same identity. These systems help achieve an increase in performance that may not be possible using a single biometric indicator. Further, multibiometric systems provide anti-spoofing measures by making it difficult for an intruder to spoof multiple biometric traits simultaneously. However, an effective fusion scheme is necessary to combine the information presented by multiple domain experts. This paper addresses the problem of information fusion in biometric verification systems by combining information at the matching score level. Experimental results on combining three biometric modalities (face, fingerprint and hand geometry) are presented.", "title": "" }, { "docid": "5378e05d2d231969877131a011b3606a", "text": "Environmental, health, and safety (EHS) concerns are receiving considerable attention in nanoscience and nanotechnology (nano) research and development (R&D). 
Policymakers and others have urged that research on nano's EHS implications be developed alongside scientific research in the nano domain rather than subsequent to applications. This concurrent perspective suggests the importance of early understanding and measurement of the diffusion of nano EHS research. The paper examines the diffusion of nano EHS publications, defined through a set of search terms, into the broader nano domain using a global nanotechnology R&D database developed at Georgia Tech. The results indicate that nano EHS research is growing rapidly although it is orders of magnitude smaller than the broader nano S&T domain. Nano EHS work is moderately multidisciplinary, but gaps in biomedical nano EHS's connections with environmental nano EHS are apparent. The paper discusses the implications of these results for the continued monitoring and development of the cross-disciplinary utilization of nano EHS research.", "title": "" }, { "docid": "adf57fe7ec7ab1481561f7664110a1e8", "text": "This paper presents a scalable 28-GHz phased-array architecture suitable for fifth-generation (5G) communication links based on four-channel ( $2\times 2$ ) transmit/receive (TRX) quad-core chips in SiGe BiCMOS with flip-chip packaging. Each channel of the quad-core beamformer chip has 4.6-dB noise figure (NF) in the receive (RX) mode and 10.5-dBm output 1-dB compression point (OP1dB) in the transmit (TX) mode with 6-bit phase control and 14-dB gain control. The phase change with gain control is only ±3°, allowing orthogonality between the variable gain amplifier and the phase shifter. The chip has high RX linearity (IP1dB = −22 dBm/channel) and consumes 130 mW in the RX mode and 200 mW in the TX mode at P1dB per channel. Advantages of the scalable all-RF beamforming architecture and circuit design techniques are discussed in detail. 4- and 32-element phased-arrays are demonstrated with detailed data link measurements using a single or eight of the four-channel TRX core chips on a low-cost printed circuit board with microstrip antennas. The 32-element array achieves an effective isotropic radiated power (EIRP) of 43 dBm at P1dB, a 45-dBm saturated EIRP, and a record-level system NF of 5.2 dB when the beamformer loss and transceiver NF are taken into account and can scan to ±50° in azimuth and ±25° in elevation with < −12-dB sidelobes and without any phase or amplitude calibration. A wireless link is demonstrated using two 32-element phased-arrays with a state-of-the-art data rate of 1.0–1.6 Gb/s in a single beam using 16-QAM waveforms over all scan angles at a link distance of 300 m.", "title": "" }, { "docid": "285f57b2b37636c417459f5d886a7982", "text": "We have prepared a set of notes incorporating the visual aids used during the Information Extraction Tutorial for the IJCAI-99 tutorial series. This document also contains additional information, such as the URLs of sites on the World Wide Web containing additional information likely to be of interest. If you are reading this document using an appropriately configured Acrobat Reader (available free from Adobe at http://www.adobe.com/prodindex/acrobat/readstep.html), you can go directly to these URLs in your web browser by clicking them.
This tutorial is designed to introduce you to the fundamental concepts of information extraction (IE) technology, and to give you an idea of what the state of the art performance in extraction technology is, what is involved in building IE systems, and various approaches taken to their design and implementation, and the kinds of resources and tools that are available to assist in constructing information extraction systems, including linguistic resources such as lexicons and name lists, as well as tools for annotating training data for automatically trained systems. Most IE systems process texts in sequential steps (or \"phases\") ranging from lexical and morphological processing, recognition and typing of proper names, parsing of larger syntactic constituents, resolution of anaphora and coreference, and the ultimate extraction of domain-relevant events and relationships from the text. We discuss each of these system components and various approaches to their design. In addition to these tutorial notes, the authors have prepared several other resources related to information extraction of which you may wish to avail yourself. We have created a web page for this tutorial at the URL mentioned in the Power Point slide in the next illustration. This page provides many links of interest to anyone wanting more information about the field of information extraction, including pointers to research sites, commercial sites, and system development tools. We felt that providing this resource would be appreciated by those taking the tutorial; however, we subject ourselves to the risk that some interesting and relevant information has been inadvertently omitted during our preparations. Please do not interpret the presence or absence of a link to any system or research paper to be a positive or negative evaluation of the system or …", "title": "" } ]
scidocsrr
1fe02f64d20bd188e4b5e086afa854cf
Utilizing correlated node mobility for efficient DTN routing
[ { "docid": "1ec4415f1ff6dd2da304cba01e4d6e0c", "text": "In disruption-tolerant networks (DTNs), network topology constantly changes and end-to-end paths can hardly be sustained. However, social network properties are observed in many DTNs and tend to be stable over time. To utilize the social network properties to facilitate packet forwarding, we present LocalCom, a community-based epidemic forwarding scheme that efficiently detects the community structure using limited local information and improves the forwarding efficiency based on the community structure. We define similarity metrics according to nodes’ encounter history to depict the neighboring relationship between each pair of nodes. A distributed algorithm which only utilizes local information is then applied to detect communities, and the formed communities have strong intra-community connections. We also present two schemes to mark and prune gateways that connect communities to control redundancy and facilitate inter-community packet forwarding. Extensive real-trace-driven simulation results are presented to support the effectiveness of our scheme.", "title": "" } ]
[ { "docid": "54abb89b518916b86b306c4a6996dc5c", "text": "Recent clinical trials of gene therapy have shown remarkable therapeutic benefits and an excellent safety record. They provide evidence for the long-sought promise of gene therapy to deliver 'cures' for some otherwise terminal or severely disabling conditions. Behind these advances lie improved vector designs that enable the safe delivery of therapeutic genes to specific cells. Technologies for editing genes and correcting inherited mutations, the engagement of stem cells to regenerate tissues and the effective exploitation of powerful immune responses to fight cancer are also contributing to the revitalization of gene therapy.", "title": "" }, { "docid": "ed2464f8cf0495e10d8b2a75a7d8bc3b", "text": "Personalized services such as news recommendations are becoming an integral part of our digital lives. The problem is that they extract a steep cost in terms of privacy. The service providers collect and analyze user's personal data to provide the service, but can infer sensitive information about the user in the process. In this work we ask the question \"How can we provide personalized news recommendation without sharing sensitive data with the provider?\"\n We propose a local private intelligence assistance framework (PrIA), which collects user data and builds a profile about the user and provides recommendations, all on the user's personal device. It decouples aggregation and personalization: it uses the existing aggregation services on the cloud to obtain candidate articles but makes the personalized recommendations locally. Our proof-of-concept implementation and small scale user study shows the feasibility of a local news recommendation system. In building a private profile, PrIA avoids sharing sensitive information with the cloud-based recommendation service. However, the trade-off is that unlike cloud-based services, PrIA cannot leverage collective knowledge from large number of users. We quantify this trade-off by comparing PrIA with Google's cloud-based recommendation service. We find that the average precision of PrIA's recommendation is only 14% lower than that of Google's service. Rather than choose between privacy or personalization, this result motivates further study of systems that can provide both with acceptable trade-offs.", "title": "" }, { "docid": "877e7654a4e42ab270a96e87d32164fd", "text": "The presence of gender stereotypes in many aspects of society is a well-known phenomenon. In this paper, we focus on studying such stereotypes and bias in Hindi movie industry (Bollywood). We analyze movie plots and posters for all movies released since 1970. The gender bias is detected by semantic modeling of plots at inter-sentence and intrasentence level. Different features like occupation, introduction of cast in text, associated actions and descriptions are captured to show the pervasiveness of gender bias and stereotype in movies. We derive a semantic graph and compute centrality of each character and observe similar bias there. We also show that such bias is not applicable for movie posters where females get equal importance even though their character has little or no impact on the movie plot. Furthermore, we explore the movie trailers to estimate on-screen time for males and females and also study the portrayal of emotions by gender in them. 
The silver lining is that our system was able to identify 30 movies over last 3 years where such stereotypes were broken.", "title": "" }, { "docid": "24c744337d831e541f347bbdf9b6b48a", "text": "Modelling and animation of crawler UGV's caterpillars is a complicated task, which has not been completely resolved in ROS/Gazebo simulators. In this paper, we proposed an approximation of track-terrain interaction of a crawler UGV, perform modelling and simulation of Russian crawler robot \"Engineer\" within ROS/Gazebo and visualize its motion in ROS/RViz software. Finally, we test the proposed model in heterogeneous robot group navigation scenario within uncertain Gazebo environment.", "title": "" }, { "docid": "4621f0bd002f8bd061dd0b224f27977c", "text": "Organisations increasingly perceive their employees as a great asset that needs to be cared for; however, at the same time, they view employees as one of the biggest potential threats to their cyber security. Employees are widely acknowledged to be responsible for security breaches in organisations, and it is important that these are given as much attention as are technical issues. A significant number of researchers have argued that non-compliance with information security policy is one of the major challenges facing organisations. This is primarily considered to be a human problem rather than a technical issue. Thus, it is not surprising that employees are one of the major underlying causes of breaches in information security. In this paper, academic literature and reports of information security institutes relating to policy compliance are reviewed. The objective is to provide an overview of the key challenges surrounding the successful implementation of information security policies. A further aim is to investigate the factors that may have an influence upon employees' behaviour in relation to information security policy. As a result, challenges to information security policy have been classified into four main groups: security policy promotion; noncompliance with security policy; security policy management and updating; and shadow security. Furthermore, the factors influencing behaviour have been divided into organisational and human factors. Ultimately, this paper concludes that continuously subjecting users to targeted awareness raising and dynamically monitoring their adherence to information security policy should increase the compliance level.", "title": "" }, { "docid": "f2d2979ca63d47ba33fffb89c16b9499", "text": "Shor and Grover demonstrated that a quantum computer can outperform any classical computer in factoring numbers and in searching a database by exploiting the parallelism of quantum mechanics. Whereas Shor's algorithm requires both superposition and entanglement of a many-particle system, the superposition of single-particle quantum states is sufficient for Grover's algorithm. Recently, the latter has been successfully implemented using Rydberg atoms. Here we propose an implementation of Grover's algorithm that uses molecular magnets, which are solid-state systems with a large spin; their spin eigenstates make them natural candidates for single-particle systems. We show theoretically that molecular magnets can be used to build dense and efficient memory devices based on the Grover algorithm. In particular, one single crystal can serve as a storage unit of a dynamic random access memory device. 
Fast electron spin resonance pulses can be used to decode and read out stored numbers of up to 10^5, with access times as short as 10^-10 seconds. We show that our proposal should be feasible using the molecular magnets Fe8 and Mn12.", "title": "" }, { "docid": "e60813a8d102dc818ebe7db75c39a4f8", "text": "OBJECTIVE\nThe behavioral binaural masking level difference (BMLD) is believed to reflect brain stem processing. However, this conflicts with transient auditory evoked potential research that indicates the auditory brain stem and middle latency responses do not demonstrate the BMLD. The objective of the present study is to investigate the brain stem and cortical mechanisms underlying the BMLD in humans using the brain stem and cortical auditory steady-state responses (ASSRs).\n\n\nDESIGN\nA 500-Hz pure tone, amplitude-modulated (AM) at 80 Hz and 7 (or 13) Hz, was used to elicit brain stem and cortical ASSRs, respectively. The masker was a 200-Hz-wide noise centered on 500 Hz. Eleven adult subjects with normal hearing were tested. Both ASSR (brain stem and cortical) and behavioral thresholds for diotic AM stimuli (when the signal and noise are in phase binaurally: SoNo) and dichotic AM stimuli (when either the signal or noise is 180 degrees out-of-phase between the two ears: SpiNo, SoNpi) were investigated. ASSR and behavioral BMLDs were obtained by subtracting the threshold for the dichotic stimuli from that for the diotic stimuli, respectively. Effects for modulation rate, signal versus noise phase changes, and behavioral versus ASSR measure on the BMLD were investigated.\n\n\nRESULTS\nBehavioral BMLDs (mean = 8.5 to 10.5 dB) obtained are consistent with results from past research. The ASSR results are similar to the pattern of results previously found for the transient auditory brain stem responses and the N1-P2 cortical auditory evoked potential, in that only the cortical ASSRs (7 or 13 Hz) demonstrate BMLDs (mean = 5.8 dB); the brain stem ASSRs (80 Hz) (mean = 1.5 dB) do not. The ASSR results differ from the previous transient N1-P2 studies, however, in that the cortical ASSRs show a BMLD only when there is a change in the signal interaural phase, but not for changes of noise interaural phase.\n\n\nCONCLUSIONS\nResults suggest that brain processes underlying the BMLD occur either in a different pathway or beyond the brain stem auditory processing underlying the 80-Hz ASSR. Results also suggest that the cortical ASSRs have somewhat different neural sources than the transient N1-P2 responses, and that they may reflect the output of neural populations that previous research has shown to be insensitive to binaural differences in noise.", "title": "" }, { "docid": "6fd71fe20e959bfdde866ff54b2b474b", "text": "The IETF developed the RPL routing protocol for Low power and Lossy Networks (LLNs). RPL allows for automated setup and maintenance of the routing tree for a meshed network using a common objective, such as energy preservation or most stable routes. To handle failing nodes and other communication disturbances, RPL includes a number of error correction functions for such situations. These error handling mechanisms, while maintaining a functioning routing tree, introduce an additional complexity to the routing process. Being a relatively new protocol, the effect of the error handling mechanisms within RPL needs to be analyzed.
This paper presents an experimental analysis of RPL’s error correction mechanisms by using the Contiki RPL implementation along with an SNMP agent to monitor the performance of RPL.", "title": "" }, { "docid": "bffc44d02edaa8a699c698185e143d22", "text": "Photoplethysmography (PPG) technology has been used to develop small, wearable, pulse rate sensors. These devices, consisting of infrared light-emitting diodes (LEDs) and photodetectors, offer a simple, reliable, low-cost means of monitoring the pulse rate noninvasively. Recent advances in optical technology have facilitated the use of high-intensity green LEDs for PPG, increasing the adoption of this measurement technique. In this review, we briefly present the history of PPG and recent developments in wearable pulse rate sensors with green LEDs. The application of wearable pulse rate monitors is discussed.", "title": "" }, { "docid": "66f0474d3f68a8a3b4bbc721a0607e38", "text": "Binary Division is one of the most crucial and silicon-intensive and of immense importance in the field of hardware implementation. A Divider is one of the key hardware blocks in most of applications such as digital signal processing, encryption and decryption algorithms in cryptography and in other logical computations. Being sequential type of operation, it is more prominent in terms of computational complexity and latency. This paper deals with the novel division algorithm for single precision floating point division Verilog Code is written and implemented on Virtex-5 FPGA series. Power dissipation has been reduced. Moreover, significant improvement has been observed in terms of area-utilisation and latency bounds. KeywordsSingle precision, Binary Division, Long Division, Vedic, Virtex, FPGA, IEEE-754.", "title": "" }, { "docid": "116463e16452d6847c94f662a90ac2ef", "text": "The ubiquity of mobile devices with global positioning functionality (e.g., GPS and AGPS) and Internet connectivity (e.g., 3G andWi-Fi) has resulted in widespread development of location-based services (LBS). Typical examples of LBS include local business search, e-marketing, social networking, and automotive traffic monitoring. Although LBS provide valuable services for mobile users, revealing their private locations to potentially untrusted LBS service providers pose privacy concerns. In general, there are two types of LBS, namely, snapshot and continuous LBS. For snapshot LBS, a mobile user only needs to report its current location to a service provider once to get its desired information. On the other hand, a mobile user has to report its location to a service provider in a periodic or on-demand manner to obtain its desired continuous LBS. Protecting user location privacy for continuous LBS is more challenging than snapshot LBS because adversaries may use the spatial and temporal correlations in the user's location samples to infer the user's location information with higher certainty. Such user location trajectories are also very important for many applications, e.g., business analysis, city planning, and intelligent transportation. However, publishing such location trajectories to the public or a third party for data analysis could pose serious privacy concerns. Privacy protection in continuous LBS and trajectory data publication has increasingly drawn attention from the research community and industry. 
In this survey, we give an overview of the state-of-the-art privacy-preserving techniques in these two problems.", "title": "" }, { "docid": "b4ed15850674851fb7e479b7181751d7", "text": "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.", "title": "" }, { "docid": "a00065c171175b84cf299718d0b29dde", "text": "Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.", "title": "" }, { "docid": "b5f2072ebee7b06bf14981f3a328ec67", "text": "Scenario generation is an important step in the operation and planning of power systems with high renewable penetrations. In this work, we proposed a data-driven approach for scenario generation using generative adversarial networks, which is based on two interconnected deep neural networks. Compared with existing methods based on probabilistic models that are often hard to scale or sample from, our method is data-driven, and captures renewable energy production patterns in both temporal and spatial dimensions for a large number of correlated resources. For validation, we use wind and solar times-series data from NREL integration data sets. We demonstrate that the proposed method is able to generate realistic wind and photovoltaic power profiles with full diversity of behaviors. We also illustrate how to generate scenarios based on different conditions of interest by using labeled data during training. 
For example, scenarios can be conditioned on weather events (e.g., high wind day, intense ramp events, or large forecasts errors) or time of the year (e.g., solar generation for a day in July). Because of the feedforward nature of the neural networks, scenarios can be generated extremely efficiently without sophisticated sampling techniques.", "title": "" }, { "docid": "73c7c4ddfa01fb2b14c6a180c3357a55", "text": "Neurodevelopmental treatment according to Dr. K. and B. Bobath can be supplemented by hippotherapy. At proper control and guidance, an improvement in posture tone, inhibition of pathological movement patterns, facilitation of normal automatical reactions and the promotion of sensorimotor perceptions is achieved. By adjustment to the swaying movements of the horse, the child feels how to retain straightening alignment, symmetry and balance. By pleasure in this therapy, the child can be motivated to satisfactory cooperation and accepts the therapy horse as its friend. The results of hippotherapy for 27 children afflicted with cerebral palsy permit a conclusion as to the value of this treatment for movement and behaviour disturbance to the drawn.", "title": "" }, { "docid": "342b72bf32937104ae80ae275c8c9585", "text": "In this paper, we introduce a Radio Frequency IDentification (RFID) based smart shopping system, KONARK, which helps users to checkout items faster and to track purchases in real-time. In parallel, our solution also provides the shopping mall owner with information about user interest on particular items. The central component of KONARK system is a customized shopping cart having a RFID reader which reads RFID tagged items. To provide check-out facility, our system detects in-cart items with almost 100% accuracy within 60s delay by exploiting the fact that the physical level information (RSSI, phase, doppler, read rate etc.) of in-cart RFID tags are different than outside tags. KONARK also detects user interest with 100% accuracy by exploiting the change in physical level parameters of RFID tag on the object user interacted with. In general, KONARK has been shown to perform with reasonably high accuracy in different mobility speeds in a mock-up of a shopping mall isle.", "title": "" }, { "docid": "648a5479933eb4703f1d2639e0c3b5c7", "text": "The Surgery Treatment Modality Committee of the Korean Gynecologic Oncologic Group (KGOG) has determined to develop a surgical manual to facilitate clinical trials and to improve communication between investigators by standardizing and precisely describing operating procedures. The literature on anatomic terminology, identification of surgical components, and surgical techniques were reviewed and discussed in depth to develop a surgical manual for gynecologic oncology. The surgical procedures provided here represent the minimum requirements for participating in a clinical trial. These procedures should be described in the operation record form, and the pathologic findings obtained from the procedures should be recorded in the pathologic report form. Here, we focused on radical hysterectomy and lymphadenectomy, and we developed a KGOG classification for those conditions.", "title": "" }, { "docid": "9d24bc6143bdb22692d0c40f38307612", "text": "This paper proposes a new image denoising approach using adaptive signal modeling and adaptive soft-thresholding. It improves the image quality by regularizing all the patches in image based on distribution modeling in transform domain. 
Instead of using a global model for all patches, it employs content adaptive models to address the non-stationarity of image signals. The distribution model of each patch is estimated individually and can vary for different transform bands and for different patch locations. In particular, we allow the distribution model for each individual patch to have non-zero expectation. To estimate the expectation and variance parameters for the transform bands of a particular patch, we exploit the non-local correlation of image and collect a set of similar patches as data samples to form the distribution. Irrelevant patches are excluded so that this non-local based modeling is more accurate than global modeling. Adaptive soft-thresholding is employed since we observed that the distribution of non-local samples can be approximated by Laplacian distribution. Experimental results show that the proposed scheme outperforms the state-of-the-art denoising methods such as BM3D and CSR in both the PSNR and the perceptual quality.", "title": "" }, { "docid": "fdefbb2ed3185eadb4657879d9776d34", "text": "Convenient monitoring of vital signs, particularly blood pressure(BP), is critical to improve the effectiveness of health-care and prevent chronic diseases. This study presents a user-friendly, low-cost, real-time, and non-contact technique for BP measurement based on the detection of photoplethysmography (PPG) using a regular webcam. Leveraging features extracted from photoplethysmograph, an individual's BP can be estimated using a neural network. Experiments were performed on 20 human participants during three different daytime slots given the influence of background illumination. Compared against the systolic blood pressure and diastolic blood pressure readings collected from a commercially available BP monitor, the proposed technique achieves an average error rate of 9.62% (Systolic BP) and 11.63% (Diastolic BP) for the afternoon session, and 8.4% (Systolic BP) and 11.18% (Diastolic BP) for the evening session. The proposed technique can be easily extended to the camera on any mobile device and thus be widely used in a pervasive manner.", "title": "" } ]
scidocsrr
885af4ec364f295e717da5d6e0248ced
A Bayesian Foundation for Individual Learning Under Uncertainty
[ { "docid": "a4c76e58074a42133a59a31d9022450d", "text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.", "title": "" } ]
[ { "docid": "b0b024072e7cde0b404a9be5862ecdd1", "text": "Recent studies have led to the recognition of the epidermal growth factor receptor HER3 as a key player in cancer, and consequently this receptor has gained increased interest as a target for cancer therapy. We have previously generated several Affibody molecules with subnanomolar affinity for the HER3 receptor. Here, we investigate the effects of two of these HER3-specific Affibody molecules, Z05416 and Z05417, on different HER3-overexpressing cancer cell lines. Using flow cytometry and confocal microscopy, the Affibody molecules were shown to bind to HER3 on three different cell lines. Furthermore, the receptor binding of the natural ligand heregulin (HRG) was blocked by addition of Affibody molecules. In addition, both molecules suppressed HRG-induced HER3 and HER2 phosphorylation in MCF-7 cells, as well as HER3 phosphorylation in constantly HER2-activated SKBR-3 cells. Importantly, Western blot analysis also revealed that HRG-induced downstream signalling through the Ras-MAPK pathway as well as the PI3K-Akt pathway was blocked by the Affibody molecules. Finally, in an in vitro proliferation assay, the two Affibody molecules demonstrated complete inhibition of HRG-induced cancer cell growth. Taken together, our findings demonstrate that Z05416 and Z05417 exert an anti-proliferative effect on two breast cancer cell lines by inhibiting HRG-induced phosphorylation of HER3, suggesting that the Affibody molecules are promising candidates for future HER3-targeted cancer therapy.", "title": "" }, { "docid": "1a6ece40fa87e787f218902eba9b89f7", "text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. 
Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.", "title": "" }, { "docid": "659cc5b1999c962c9fb0b3544c8b928a", "text": "During the recent years the mainstream framework for HCI research — the informationprocessing cognitive psychology —has gained more and more criticism because of serious problems in applying it both in research and practical design. In a debate within HCI research the capability of information processing psychology has been questioned and new theoretical frameworks searched. This paper presents an overview of the situation and discusses potentials of Activity Theory as an alternative framework for HCI research and design.", "title": "" }, { "docid": "e5bf05ae6700078dda83eca8d2f65cd4", "text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.", "title": "" }, { "docid": "bc48242b9516948dc0ab95f1bead053f", "text": "This article presents the semantic portal MuseumFinland for publishing heterogeneous museum collections on the Semantic Web. It is shown how museums with their semantically rich and interrelated collection content can create a large, consolidated semantic collection portal together on the web. By sharing a set of ontologies, it is possible to make collections semantically interoperable, and provide the museum visitors with intelligent content-based search and browsing services to the global collection base. The architecture underlying MuseumFinland separates generic search and browsing services from the underlying application dependent schemas and metadata by a layer of logical rules. As a result, the portal creation framework and software developed has been applied successfully to other domains as well. MuseumFinland got the Semantic Web Challence Award (second prize) in 2004. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ae6feb822ce68f336d831559b17c4c31", "text": "Despite years of intensive research, Byzantine fault-tolerant (BFT) systems have not yet been adopted in practice. 
This is due to additional cost of BFT in terms of resources, protocol complexity and performance, compared with crash fault-tolerance (CFT). This overhead of BFT comes from the assumption of a powerful adversary that can fully control not only the Byzantine faulty machines, but at the same time also the message delivery schedule across the entire network, effectively inducing communication asynchrony and partitioning otherwise correct machines at will. To many practitioners, however, such strong attacks appear irrelevant. In this paper, we introduce cross fault tolerance or XFT, a novel approach to building reliable and secure distributed systems and apply it to the classical state-machine replication (SMR) problem. In short, an XFT SMR protocol provides the reliability guarantees of widely used asynchronous CFT SMR protocols such as Paxos and Raft, but also tolerates Byzantine faults in combination with network asynchrony, as long as a majority of replicas are correct and communicate synchronously. This allows the development of XFT systems at the price of CFT (already paid for in practice), yet with strictly stronger resilience than CFT — sometimes even stronger than BFT itself. As a showcase for XFT, we present XPaxos, the first XFT SMR protocol, and deploy it in a geo-replicated setting. Although it offers much stronger resilience than CFT SMR at no extra resource cost, the performance of XPaxos matches that of the state-of-the-art CFT protocols.", "title": "" }, { "docid": "74497fc5d50ad6047d428714bfbba6b8", "text": "Newer models for interacting with wireless sensors such as Internet of Things and Sensor Cloud aim to overcome restricted resources and efficiency. The Missouri S&T (science and technology) sensor cloud enables different networks, spread in a huge geographical area, to connect together and be employed simultaneously by multiple users on demand. Virtual sensors, which are at the core of this sensor cloud architecture, assist in creating a multiuser environment on top of resource-constrained physical wireless sensors and can help in supporting multiple applications.", "title": "" }, { "docid": "fcd80cdb7d2d629f767f04b38c696355", "text": "Electronic commerce and electronic business greatly need new payment systems that will support their further development. To better understand problems and perspectives of the electronic payment systems this article describes a classification and different characteristic aspects of payment systems. It suggests distinctions between payment systems and mediating systems, and is trying to illustrate advantages and limitations of diverse categories of payment systems using the defined characteristics. It is highlighting importance of userrelated aspects in design and introduction of electronic payment systems for mass customers.", "title": "" }, { "docid": "e48c260c2a0ef52c1aff8d11a3dc071e", "text": "Current transformer (CT) saturation can cause protective relay mal-operation or even prevent tripping. The wave shape of the secondary current is severely distorted as the CT is forced into deep saturation when the residual flux in the core adds to the flux change caused by faults. In this paper, a morphological lifting scheme is proposed to extract features contained in the waveform of the signal. The detection of the CT saturation is accurately achieved and the points of the inflection, where the saturation begins and ends, are found with the scheme used. 
This paper also presents a compensation algorithm, based upon the detection results, to reconstruct healthy secondary currents. The proposed morphological lifting scheme and compensation algorithm are demonstrated on a sample power system. The simulation results clearly indicate that they can successfully detect and compensate the distorted secondary current of a saturated CT with residual flux.", "title": "" }, { "docid": "d087d4d0bb41f655f0743cf8e0963f0c", "text": "A GTO current source inverter which consists of six main GTO's, two auxiliary GTO's, and three capacitors is presented. This inverter can supply both the sinusoidal voltage and current to the motor by pulsewidth modulation (PWM) techniques. The normal PWM pattern produced by two control signals with the carrier and the modulating waves and the optimal PWM pattern determined by the harmonic analysis are described. The experimental waveforms for 2.2-kW induction motor drives are given and the circuit operation of this inverter in the PWM technique is clearly shown. In addition, the steady-state characteristics of this inverter-induction motor drive system are analyzed by the state-variable methods, and a close agreement between the analyzed and the experimental waveforms is obtained. It is shown that the harmonic components are eliminated or reduced by using the optimal PWM pattern, and the new inverter with sinusoidal current and voltage is very excellent for ac motor drive.", "title": "" }, { "docid": "7c3f14bbbb3cf2bbe7c9caaf42361445", "text": "In this paper, we present a method for generating fast conceptual urban design prototypes. We synthesize spatial configurations for street networks, parcels and building volumes. Therefore, we address the problem of implementing custom data structures for these configurations and how the generation process can be controlled and parameterized. We exemplify our method by the development of new components for Grasshopper/Rhino3D and their application in the scope of selected case studies. By means of these components, we show use case applications of the synthesis algorithms. In the conclusion, we reflect on the advantages of being able to generate fast urban design prototypes, but we also discuss the disadvantages of the concept and the usage of Grasshopper as a user interface.", "title": "" }, { "docid": "46ea64a204ae93855676146d84063c1a", "text": "PURPOSE\nThe present study examined the utility of 2 measures proposed as markers of specific language impairment (SLI) in identifying specific impairments in language or working memory in school-age children.\n\n\nMETHOD\nA group of 400 school-age children completed a 5-min screening consisting of nonword repetition and sentence recall. A subset of low (n = 52) and average (n = 38) scorers completed standardized tests of language, short-term and working memory, and nonverbal intelligence.\n\n\nRESULTS\nApproximately equal numbers of children were identified with specific impairments in either language or working memory. A group about twice as large had deficits in both language and working memory. Sensitivity of the screening measure for both SLI and specific working memory impairments was 84% or greater, although specificity was closer to 50%. Sentence recall performance below the 10th percentile was associated with sensitivity and specificity values above 80% for SLI.\n\n\nCONCLUSIONS\nDevelopmental deficits may be specific to language or working memory, or include impairments in both areas. 
Sentence recall is a useful clinical marker of SLI and combined language and working memory impairments.", "title": "" }, { "docid": "ec9b3423e0a71e8b9457f10eb874f2bc", "text": "PURPOSE\nThe term \"buried penis\" has been applied to a variety of penile abnormalities and includes an apparent buried penis that is obvious at birth. The purpose of this study was to examine prospectively the congenital buried penis and to evaluate an operative technique for its management.\n\n\nMATERIALS AND METHODS\nA total of 31 males 2 to 28 months old (mean age 12.3 months) with a congenital buried penis underwent surgical correction of the anomaly. Measurements were made of the penile shaft skin, inner leaf of the prepuce, glans length and stretched penile length. Observations of the subcutaneous tissue of the penis were made. The outer leaf of the prepuce was resected, following which covering of the penile shaft was accomplished with a combination of the penile shaft skin and the inner leaf of the prepuce.\n\n\nRESULTS\nStretched penile lengths ranged from 2.3 to 4.1 cm (mean 3.1). The glans length from the tip of the glans dorsally to the corona ranged from 0.9 to 1.6 cm (mean 1.2). The inner leaf of the prepuce ranged from 0.9 to 2.2 cm (mean 1.5) in length, while the dorsal penile skin lengths were 1 to 1.6 cm (mean 0.8). In all patients complete shaft coverage was accomplished using a combination of varying degrees of penile shaft skin and inner leaf of the prepuce. In no case was there a requirement for either unfurling of the inner and outer leaf of the prepuce or mobilization of scrotal flaps to accomplish shaft coverage. All patients healed well and have done well with a followup of 6 months to 1 year.\n\n\nCONCLUSIONS\nCongenital buried penis is a syndrome consisting of a paucity of penile shaft skin and a short penile shaft. The anomaly may be successfully repaired by carefully preserving a length of inner leaf of the prepuce sufficient to cover, in some instances, the length of the penile shaft. Anchoring of the penile skin to the shaft is not recommended.", "title": "" }, { "docid": "7b4e9043e11d93d8152294f410390f6d", "text": "In this paper, we present a series of methods to authenticate a user with a graphical password. To that end, we employ the user¿s personal handheld device as the password decoder and the second factor of authentication. In our methods, a service provider challenges the user with an image password. To determine the appropriate click points and their order, the user needs some hint information transmitted only to her handheld device. We show that our method can overcome threats such as key-loggers, weak password, and shoulder surfing. With the increasing popularity of handheld devices such as cell phones, our approach can be leveraged by many organizations without forcing the user to memorize different passwords or carrying around different tokens.", "title": "" }, { "docid": "46632965f75d0b07c8f35db944277ab1", "text": "The aim of this cross-sectional study was to assess the complications associated with tooth supported fixed dental prosthesis amongst patients reporting at University College of Dentistry Lahore, Pakistan. An interview based questionnaire was used on 112 patients followed by clinical oral examination by two calibrated dentists. Approximately 95% participants were using porcelain fused to metal prosthesis with 60% of prosthesis being used in posterior segments of mouth. 
Complications like dental caries, coronal abutment fracture, radicular abutment fracture, occlusal interferences, root canal failures and decementations were more significantly associated with crowns than bridges (p=0.000). On the other hand esthetic issues, periapical lesions, periodontal problems, porcelain fractures and metal damage were more commonly associated with bridges (p=0.000). All cases of dental caries reported were associated with acrylic crown and bridges, whereas all coronal abutment fractures were associated with metal prosthesis (p=0.000). A significantly higher number of participants who got their fixed dental prosthesis from other sources i.e. Paramedics, technicians, dental assistants or unqualified dentists had periapical lesions, decementations, esthetic issues and periodontal diseases. This association was found to be statistically significant (p=0.000). Complications associated with fixed dental prosthesis like root canal failures, decementations, periapical lesions and periodontal disease were more significantly associated with prosthesis fabricated by other sources over the period of 5 to 10 years.", "title": "" }, { "docid": "7cc9b6f1837d992b64071e2149e81a9a", "text": "This article presents an application of Augmented Reality technology for interior design. Plus, an Educational Interior Design Project is reviewed. Along with the dramatic progress of digital technology, virtual information techniques are also required for architectural projects. Thus, the new technology of Augmented Reality offers many advantages for digital architectural design and construction fields. AR is also being considered as a new design approach for interior design. In an AR environment, the virtual furniture can be displayed and modified in real-time on the screen, allowing the user to have an interactive experience with the virtual furniture in a real-world environment. Here, AR environment is exploited as the new working environment for architects in architectural design works, and then they can do their work conveniently as such collaborative discussion through AR environment. Finally, this study proposes a new method for applying AR technology to interior design work, where a user can view virtual furniture and communicate with 3D virtual furniture data using a dynamic and flexible user interface. Plus, all the properties of the virtual furniture can be adjusted using occlusionbased interaction method for a Tangible Augmented Reality.", "title": "" }, { "docid": "98cef46a572d3886c8a11fa55f5ff83c", "text": "Deep convolutional neural networks (CNNs) have proven highly effective for visual recognition, where learning a universal representation from activations of convolutional layer plays a fundamental problem. In this paper, we present Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep architecture that quantizes the local activations of convolutional layer in a deep generative model, by training them in an end-to-end manner. To incorporate FV encoding strategy into deep generative models, we introduce Variational Auto-Encoder model, which steers a variational inference and learning in a neural network which can be straightforwardly optimized using standard stochastic gradient method. Different from the FV characterized by conventional generative models (e.g., Gaussian Mixture Model) which parsimoniously fit a discrete mixture model to data distribution, the proposed FV-VAE is more flexible to represent the natural property of data for better generalization. 
Extensive experiments are conducted on three public datasets, i.e., UCF101, ActivityNet, and CUB-200-2011 in the context of video action recognition and fine-grained image classification, respectively. Superior results are reported when compared to state-of-the-art representations. Most remarkably, our proposed FV-VAE achieves to-date the best published accuracy of 94.2% on UCF101.", "title": "" }, { "docid": "6c52a9b8e7075ba78020f7ac246d7dd6", "text": "A microgrid is a controllable component of the smart grid defined as a part of distribution network capable of supplying its own local load even in the case of disconnection from the upstream network. Microgrids incorporate large amount of renewable and non-renewable distributed generation (DG) that are connected to the system either directly or by power electronics (PE) interface. The diversity of technologies used in DGs and loads, high penetration of DGs, economic operation of DGs, dynamics of low-inertia conventional DGs and PE interfaced inertialess DGs and smart operation by means of enhanced communication infrastructure have raised challenges in widespread utilization of microgrids as basis of smart grids. Power quality, protection, economic and secure operation, active management, communication, dynamics and control of microgrids are among the most important issues under research both in academy and industry. Technical concerns over dynamics of microgrids especially in autonomous (island) mode necessitate revision of current paradigms in control of energy systems. This paper addresses current challenges towards controlling microgrids and surveys dynamic modeling, stability and control of microgrids. Future trends in realizing smart grids through aggregation of microgrids and research needs in this path are discussed at the end of this paper.", "title": "" }, { "docid": "617e76bde28655d92eac1e22f5f56e32", "text": "OBJECTIVE\nTo determine overall, test-retest and inter-rater reliability of posture indices among persons with idiopathic scoliosis.\n\n\nDESIGN\nA reliability study using two raters and two test sessions.\n\n\nSETTING\nTertiary care paediatric centre.\n\n\nPARTICIPANTS\nSeventy participants aged between 10 and 20 years with different types of idiopathic scoliosis (Cobb angle 15 to 60°) were recruited from the scoliosis clinic.\n\n\nMAIN OUTCOME MEASURES\nBased on the XY co-ordinates of natural reference points (e.g., eyes) as well as markers placed on several anatomical landmarks, 32 angular and linear posture indices taken from digital photographs in the standing position were calculated from a specially developed software program. Generalisability theory served to estimate the reliability and standard error of measurement (SEM) for the overall, test-retest and inter-rater designs. Bland and Altman's method was also used to document agreement between sessions and raters.\n\n\nRESULTS\nIn the random design, dependability coefficients demonstrated a moderate level of reliability for six posture indices (ϕ=0.51 to 0.72) and a good level of reliability for 26 posture indices out of 32 (ϕ≥0.79). Error attributable to marker placement was negligible for most indices. Limits of agreement and SEM values were larger for shoulder protraction, trunk list, Q angle, cervical lordosis and scoliosis angles. The most reproducible indices were waist angles and knee valgus and varus.\n\n\nCONCLUSIONS\nPosture can be assessed in a global fashion from photographs in persons with idiopathic scoliosis. 
Despite the good reliability of marker placement, other studies are needed to minimise measurement errors in order to provide a suitable tool for monitoring change in posture over time.", "title": "" }, { "docid": "6182626269d38c81fa63eb2cab91caca", "text": "Environmental management, a term encompassing environmental planning, protection, monitoring, assessment, research, education, conservation and sustainable use of resources, is now accepted as a major guiding factor for sustainable development at the regional and national level. It is increasingly recognized that environmental factors and ecological imperatives must be built into the total planning process if the long-term goal of making industrial development sustainable is to be achieved. Here we will try to define and discuss the role of Environmental Analysis in the strategic management process of an organization. In today's complex world, strategic planning should, as far as is feasible, consider the impact of important factors related to organizations. The strategic planning of a business includes all functional subdivisions and steers them in a united direction. One of these subsystems is human resource management. Strategic human resource management comes after strategic planning and is followed by strategic human resource planning as a major activity in all industries. In strategic planning, different analytical methods and techniques can be used; one of them is PEST analysis. This paper introduces how to apply it in a new manner.", "title": "" } ]
scidocsrr
37763c0631aa990242d020566f824f2b
Heterogeneous Knowledge Transfer in Video Emotion Recognition, Attribution and Summarization
[ { "docid": "69504625b05c735dd80135ef106a8677", "text": "The amount of videos available on the Web is growing explosively. While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the groundtruth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.", "title": "" } ]
[ { "docid": "a0b147e6baae3ea7622446da0b8d8e26", "text": "The Web has come a long way since its invention by Berners-Lee, when it focused essentially on visualization and presentation of content for human consumption (Syntactic Web), to a Web providing meaningful content, facilitating the integration between people and machines (Semantic Web). This paper presents a survey of different tools that provide the enrichment of the Web with understandable annotation, in order to make its content available and interoperable between systems. We can group Semantic Annotation tools into the diverse dimensions: dynamicity, storage, information extraction process, scalability and customization. The analysis of the different annotation tools shows that (semi-)automatic and automatic systems aren't as efficient as needed without human intervention and will continue to evolve to solve the challenge. Microdata, RDFa and the new HTML5 standard will certainly bring new contributions to this issue.", "title": "" }, { "docid": "00bc7c810946fa30bf1fdc66e8fb7fc2", "text": "Voluntary motor commands produce two kinds of consequences. Initially, a sensory consequence is observed in terms of activity in our primary sensory organs (e.g., vision, proprioception). Subsequently, the brain evaluates the sensory feedback and produces a subjective measure of utility or usefulness of the motor commands (e.g., reward). As a result, comparisons between predicted and observed consequences of motor commands produce two forms of prediction error. How do these errors contribute to changes in motor commands? Here, we considered a reach adaptation protocol and found that when high quality sensory feedback was available, adaptation of motor commands was driven almost exclusively by sensory prediction errors. This form of learning had a distinct signature: as motor commands adapted, the subjects altered their predictions regarding sensory consequences of motor commands, and generalized this learning broadly to neighboring motor commands. In contrast, as the quality of the sensory feedback degraded, adaptation of motor commands became more dependent on reward prediction errors. Reward prediction errors produced comparable changes in the motor commands, but produced no change in the predicted sensory consequences of motor commands, and generalized only locally. Because we found that there was a within subject correlation between generalization patterns and sensory remapping, it is plausible that during adaptation an individual's relative reliance on sensory vs. reward prediction errors could be inferred. We suggest that while motor commands change because of sensory and reward prediction errors, only sensory prediction errors produce a change in the neural system that predicts sensory consequences of motor commands.", "title": "" }, { "docid": "ea646c7d5c04a44e33fefc87818c2a11", "text": "Learning to rank has become an important research topic in machine learning. While most learning-to-rank methods learn the ranking functions by minimizing loss functions, it is the ranking measures (such as NDCG and MAP) that are used to evaluate the performance of the learned ranking functions. In this work, we reveal the relationship between ranking measures and loss functions in learningto-rank methods, such as Ranking SVM, RankBoost, RankNet, and ListMLE. We show that the loss functions of these methods are upper bounds of the measurebased ranking errors. 
As a result, the minimization of these loss functions will lead to the maximization of the ranking measures. The key to obtaining this result is to model ranking as a sequence of classification tasks, and define a so-called essential loss for ranking as the weighted sum of the classification errors of individual tasks in the sequence. We have proved that the essential loss is both an upper bound of the measure-based ranking errors, and a lower bound of the loss functions in the aforementioned methods. Our proof technique also suggests a way to modify existing loss functions to make them tighter bounds of the measure-based ranking errors. Experimental results on benchmark datasets show that the modifications can lead to better ranking performances, demonstrating the correctness of our theoretical analysis.", "title": "" }, { "docid": "61160371b2a85f1b937105cc43d3c70d", "text": "Regular expressions are extremely useful, because they allow us to work with text in terms of patterns. They are considered the most sophisticated means of performing operations such as string searching, manipulation, validation, and formatting in all applications that deal with text data. Character recognition problem scenarios in sequence analysis that are ideally suited for the application of regular expression algorithms. This paper describes a use of regular expressions in this problem domain, and demonstrates how the effective use of regular expressions that can serve to facilitate more efficient and more effective character recognition.", "title": "" }, { "docid": "f7fc47986046f9d02f9b89f244341123", "text": "Incorporating the body dynamics of compliant robots into their controller architectures can drastically reduce the complexity of locomotion control. An extreme version of this embodied control principle was demonstrated in highly compliant tensegrity robots, for which stable gait generation was achieved by using only optimized linear feedback from the robot's sensors to its actuators. The morphology of quadrupedal robots has previously been used for sensing and for control of a compliant spine, but never for gait generation. In this paper, we successfully apply embodied control to the compliant, quadrupedal Oncilla robot. As initial experiments indicated that mere linear feedback does not suffice, we explore the minimal requirements for robust gait generation in terms of memory and nonlinear complexity. Our results show that a memoryless feedback controller can generate a stable trot by learning the desired nonlinear relation between the input and the output signals. We believe this method can provide a very useful tool for transferring knowledge from open loop to closed loop control on compliant robots.", "title": "" }, { "docid": "7b44c4ec18d01f46fdd513780ba97963", "text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. 
The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.", "title": "" }, { "docid": "c2e0166a7604836cc33836d1ca86e335", "text": "Owing to the dramatic mobile IP growth, the emerging Internet of Things, and cloud-based applications, wireless networking is witnessing a paradigm shift. By fully exploiting spatial degrees of freedom, massive multiple-input-multiple-output (MIMO) systems promise significant gains in data rates and link reliability. Although the research community has recognized the theoretical benefits of these systems, building the hardware of such complex systems is a challenge in practice. This paper presents a time division duplex (TDD)-based 128-antenna massive MIMO prototype system from theory to reality. First, an analytical signal model is provided to facilitate the setup of a feasible massive MIMO prototype system. Second, a link-level simulation consistent with practical TDDbased massive MIMO systems is conducted to guide and validate the massive MIMO system design. We design and implement the TDDbased 128-antenna massive MIMO prototype system with the guidelines obtained from the link-level simulation. Uplink real-time video transmission and downlink data transmission under the configuration of multiple single-antenna users are achieved. Comparisons with state-of-the-art prototypes demonstrate the advantages of the proposed system in terms of antenna number, bandwidth, latency, and throughput. The proposed system is also equipped with scalability, which makes the system applicable to a wide range of massive scenarios.", "title": "" }, { "docid": "ee6906550c2f9d294e411688bae5db71", "text": "This position paper formalises an abstract model for complex negotiation dialogue. This model is to be used for the benchmark of optimisation algorithms ranging from Reinforcement Learning to Stochastic Games, through Transfer Learning, One-Shot Learning or others.", "title": "" }, { "docid": "6e4dcb451292cc38cb72300a24135c1b", "text": "This survey gives state-of-the-art of genetic algorithm (GA) based clustering techniques. Clustering is a fundamental and widely applied method in understanding and exploring a data set. Interest in clustering has increased recently due to the emergence of several new areas of applications including data mining, bioinformatics, web use data analysis, image analysis etc. To enhance the performance of clustering algorithms, Genetic Algorithms (GAs) is applied to the clustering algorithm. GAs are the best-known evolutionary techniques. The capability of GAs is applied to evolve the proper number of clusters and to provide appropriate clustering. This paper present some existing GA based clustering algorithms and their application to different problems and domains.", "title": "" }, { "docid": "e7773b4aa444ceae84f100af5ac71034", "text": "Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of userdriven footprints (i.e., “checkins”). Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. 
In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. We find that: (i) LSS users follow the “Lèvy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services.", "title": "" }, { "docid": "11112e1738bd27f41a5b57f07b71292c", "text": "Rotor-cage fault detection in inverter-fed induction machines is still difficult nowadays as the dynamics introduced by the control or load influence the fault-indicator signals commonly applied. In addition, detection is usually possible only when the machine is operated above a specific load level to generate a significant rotor-current magnitude. This paper proposes a new method of detecting rotor-bar defects at zero load and almost at standstill. The method uses the standard current sensors already present in modern industrial inverters and, hence, is noninvasive. It is thus well suited as a start-up test for drives. By applying an excitation with voltage pulses using the switching of the inverter and then measuring the resulting current slope, a new fault indicator is obtained. As a result, it is possible to clearly identify the fault-induced asymmetry in the machine's transient reactances. Although the transient-flux linkage cannot penetrate the rotor because of the cage, the faulty bar locally influences the zigzag flux, leading to a significant change in the transient reactances. Measurement results show the applicability and sensitivity of the proposed method.", "title": "" }, { "docid": "80d8a8c09e9918981d1a93e5bccf45ba", "text": "In this paper, we study a multi-residential electricity load scheduling problem with multi-class appliances in smart grid. Compared with the previous works in which only limited types of appliances are considered or only single residence grids are considered, we model the grid system more practically with jointly considering multi-residence and multi-class appliance. We formulate an optimization problem to maximize the sum of the overall satisfaction levels of residences which is defined as the sum of utilities of the residential customers minus the total cost for energy consumption. Then, we provide an electricity load scheduling algorithm by using a PL-Generalized Benders Algorithm which operates in a distributed manner while protecting the private information of the residences. By applying the algorithm, we can obtain the near-optimal load scheduling for each residence, which is shown to be very close to the optimal scheduling, and also obtain the lower and upper bounds on the optimal sum of the overall satisfaction levels of all residences, which are shown to be very tight.", "title": "" }, { "docid": "f15f72e8b513b0a9b7ddb9b73a559571", "text": "Teenagers are among the most prolific users of social network sites (SNS). Emerging studies find that youth spend a considerable portion of their daily life interacting through social media. Subsequently, questions and controversies emerge about the effects SNS have on adolescent development. This review outlines the theoretical frameworks researchers have used to understand adolescents and SNS. 
It brings together work from disparate fields that examine the relationship between SNS and social capital, privacy, youth safety, psychological well-being, and educational achievement. These research strands speak to high-profile concerns and controversies that surround youth participation in these online communities, and offer ripe areas for future research.", "title": "" }, { "docid": "eb32ce661a0d074ce90861793a2e4de7", "text": "A new transfer function from control voltage to duty cycle, the closed-current loop, which captures the natural sampling effect is used to design a controller for the voltage-loop of a pulsewidth modulated (PWM) dc-dc converter operating in continuous-conduction mode (CCM) with peak current-mode control (PCM). This paper derives the voltage loop gain and the closed-loop transfer function from reference voltage to output voltage. The closed-loop transfer function from the input voltage to the output voltage, or the closed-loop audio-susceptibility is derived. The closed-loop transfer function from output current to output voltage, or the closed loop output impedance is also derived. The derivation is performed using an averaged small-signal model of the example boost converter for CCM. Experimental verification is presented. The theoretical and experimental results were in good agreement, confirming the validity of the transfer functions derived.", "title": "" }, { "docid": "3d7406edd98fbdf6587076f88b191569", "text": "I am the very model of a modern Major-General, I've information vegetable, animal, and mineral, I know the kings of England, and I quote the fights historical From Marathon to Waterloo, in order categorical... Imagine that you are an analyst with an investment firm that tracks airline stocks. You're given the task of determining the relationship (if any) between airline announcements of fare increases and the behavior of their stocks the next day. Historical data about stock prices is easy to come by, but what about the airline announcements? You will need to know at least the name of the airline, the nature of the proposed fare hike, the dates of the announcement, and possibly the response of other airlines. Fortunately, these can be all found in news articles like this one: Citing high fuel prices, United Airlines said Friday it has increased fares by $6 per round trip on flights to some cities also served by lower-cost carriers. American Airlines, a unit of AMR Corp., immediately matched the move, spokesman Tim Wagner said. United, a unit of UAL Corp., said the increase took effect Thursday and applies to most routes where it competes against discount carriers, such as Chicago to Dallas and Denver to San Francisco. This chapter presents techniques for extracting limited kinds of semantic content from text. This process of information extraction (IE) turns the unstructured information embedded in texts into structured data, for example for populating a relational database to enable further processing. The first step in most IE tasks is to find the proper names or named entities mentioned in a text. The task of named entity recognition (NER) is to find each mention of a named entity in the text and label its type. What constitutes a named entity type is application specific; these commonly include people, places, and organizations but also more specific entities from the names of genes and proteins (Cohen and Demner-Fushman, 2014) to the names of college courses (McCallum, 2005).
Having located all of the mentions of named entities in a text, it is useful to link, or cluster, these mentions into sets that correspond to the entities behind the mentions, for example inferring that mentions of United Airlines and United in the sample text refer to the same real-world entity. We'll defer discussion of this task of coreference resolution until Chapter 23. The task …", "title": "" }, { "docid": "6241cb482e386435be2e33caf8d94216", "text": "A fog radio access network (F-RAN) is studied, in which $K_T$ edge nodes (ENs) connected to a cloud server via orthogonal fronthaul links, serve $K_R$ users through a wireless Gaussian interference channel. Both the ENs and the users have finite-capacity cache memories, which are filled before the user demands are revealed. While a centralized placement phase is used for the ENs, which model static base stations, a decentralized placement is leveraged for the mobile users. An achievable transmission scheme is presented, which employs a combination of interference alignment, zero-forcing and interference cancellation techniques in the delivery phase, and the \\textit{normalized delivery time} (NDT), which captures the worst-case latency, is analyzed.", "title": "" }, { "docid": "900d9747114db774abcb26bb01b8a89e", "text": "Social-networking functions are increasingly embedded in online rating systems. These functions alter the rating context in which consumer ratings are generated. In this paper, we empirically investigate online friends’ social influence in online book ratings. Our quasi-experiment research design exploits the temporal sequence of social-networking events and ratings and offers a new method for identifying social influence while accounting for the homophily effect. We find rating similarity between friends is significantly higher after the formation of the friend relationships, indicating that with social-networking functions, online rating contributors are socially nudged when giving their ratings. Additional exploration of contingent factors suggests that social influence is stronger for older books and users who have smaller networks, and relatively more recent and extremely negative ratings cast more salient influence. Our study suggests that friends’ social influence is an important consideration when introducing social-networking functions to online rating systems.", "title": "" }, { "docid": "c20393a25f4e53be6df2bd49abf6635f", "text": "This paper overviews NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions. AKG is composed of two subtasks: Action Mining (AM) and Actionable Knowledge Graph Generation (AKGG). Both subtasks are focused on English language. 9 runs have been submitted by 4 teams for the task. In this paper we describe both the subtasks, datasets, evaluation methods and the results of meta analyses.", "title": "" }, { "docid": "8c28bfbbd2de24340b56f634d982c1ed", "text": "The public perception of shared goods has changed substantially in the past few years. While co-owning properties has been widely accepted for a while (e.g., timeshares), the notion of sharing bikes, cars, or even rides on an on-demand basis is just now starting to gain widespread popularity. The emerging “sharing economy” is particularly interesting in the context of cities that struggle with population growth and increasing density.
While sharing vehicles promises to reduce inner-city traffic, congestion, and pollution problems, the associated business models are not without problems themselves. Using agency theory, in this article we discuss existing shared mobility business models in an effort to unveil the optimal relationship between service providers (agents) and the local governments (principals) to achieve the common objective of sustainable mobility. Our findings show private or public models are fraught with conflicts, and point to a merit model as the most promising alignment of the strengths of agents and principals.", "title": "" }, { "docid": "c90ab409ea2a9726f6ddded45e0fdea9", "text": "About a decade ago, the Adult Attachment Interview (AAI; C. George, N. Kaplan, & M. Main, 1985) was developed to explore parents' mental representations of attachment as manifested in language during discourse of childhood experiences. The AAI was intended to predict the quality of the infant-parent attachment relationship, as observed in the Ainsworth Strange Situation, and to predict parents' responsiveness to their infants' attachment signals. The current meta-analysis examined the available evidence with respect to these predictive validity issues. In regard to the 1st issue, the 18 available samples (N = 854) showed a combined effect size of 1.06 in the expected direction for the secure vs. insecure split. For a portion of the studies, the percentage of correspondence between parents' mental representation of attachment and infants' attachment security could be computed (the resulting percentage was 75%; kappa = .49, n = 661). Concerning the 2nd issue, the 10 samples (N = 389) that were retrieved showed a combined effect size of .72 in the expected direction. According to conventional criteria, the effect sizes are large. It was concluded that although the predictive validity of the AAI is a replicated fact, there is only partial knowledge of how attachment representations are transmitted (the transmission gap).", "title": "" } ]
scidocsrr
622895257def22d667cfb3345851049e
What constitutes “style” in authorship attribution?
[ { "docid": "1c068cfb1a801a89ca87a1ac1c279c97", "text": "The analysis of authorial style, termed stylometry, assumes that style is quantifiably measurable for evaluation of distinctive qualities. Stylometry research has yielded several methods and tools over the past 200 years to handle a variety of challenging cases. This survey reviews several articles within five prominent subtasks: authorship attribution, authorship verification, authorship profiling, stylochronometry, and adversarial stylometry. Discussions on datasets, features, experimental techniques, and recent approaches are provided. Further, a current research challenge lies in the inability of authorship analysis techniques to scale to a large number of authors with few text samples. Here, we perform an extensive performance analysis on a corpus of 1,000 authors to investigate authorship attribution, verification, and clustering using 14 algorithms from the literature. Finally, several remaining research challenges are discussed, along with descriptions of various open-source and commercial software that may be useful for stylometry subtasks.", "title": "" } ]
[ { "docid": "fcb55b123a93c046f34ea8faf64e240d", "text": "Organic food is an achievement of agricultural scientific and technological innovations, in order to ensure the application of agricultural science and technology and innovation from organic agricultural producers and participants benefit. This study employed the decomposed theory of planned behavior to analyze the antecedent factors that influence consumer purchase of organic food by deconstruction of the constructs related to attitude, the subjective norm, and the perceived behavioral control. Based on these influencing effects, the study aimed to provide marketing suggestions to popularize organic food. In total, 441 effective questionnaires were collected. The data was analyzed using the structural equation model. The study results indicate that consumers with ethical consciousness will consider environmental protection when they make food purchasing decisions, and have a comparatively positive attitude towards the purchase of organic food. One of the main motives for purchasing organic food is the requirement that food ingredients are safe and natural. Additionally, with respect to external influences, consumers tend to trust information delivered by TV media, experts, and Internet word-of-mouth. Nonetheless, consumers with high or low involvement obviously have significant differences in the relationships between facilitating conditions and perceived behavioral control. Therefore, intensifying the environmental resources of organic food will enhance the purchasing intention of consumers who have low involvement.", "title": "" }, { "docid": "e9c27bfdbe5ca74dd8f451a0916e4dfa", "text": "BACKGROUND\nSuicidal ideation and suicide attempts are serious but not rare conditions in adolescents. However, there are several research and practical suicide-prevention initiatives that discuss the possibility of preventing serious self-harm. Profound knowledge about risk and protective factors is therefore necessary. The aim of this study is a) to clarify the role of parenting behavior and parenting styles in adolescents' suicide attempts and b) to identify other statistically significant and clinically relevant risk and protective factors for suicide attempts in a representative sample of German adolescents.\n\n\nMETHODS\nIn the years 2007/2008, a representative written survey of N = 44,610 students in the 9th grade of different school types in Germany was conducted. In this survey, the lifetime prevalence of suicide attempts was investigated as well as potential predictors including parenting behavior. A three-step statistical analysis was carried out: I) As basic model, the association between parenting and suicide attempts was explored via binary logistic regression controlled for age and sex. II) The predictive values of 13 additional potential risk/protective factors were analyzed with single binary logistic regression analyses for each predictor alone. Non-significant predictors were excluded in Step III. III) In a multivariate binary logistic regression analysis, all significant predictor variables from Step II and the parenting styles were included after testing for multicollinearity.\n\n\nRESULTS\nThree parental variables showed a relevant association with suicide attempts in adolescents - (all protective): mother's warmth and father's warmth in childhood and mother's control in adolescence (Step I). 
In the full model (Step III), Authoritative parenting (protective: OR: .79) and Rejecting-Neglecting parenting (risk: OR: 1.63) were identified as significant predictors (p < .001) for suicidal attempts. Seven further variables were interpreted to be statistically significant and clinically relevant: ADHD, female sex, smoking, Binge Drinking, absenteeism/truancy, migration background, and parental separation events.\n\n\nCONCLUSIONS\nParenting style does matter. While children of Authoritative parents profit, children of Rejecting-Neglecting parents are put at risk - as we were able to show for suicide attempts in adolescence. Some of the identified risk factors contribute new knowledge and potential areas of intervention for special groups such as migrants or children diagnosed with ADHD.", "title": "" }, { "docid": "085155ebfd2ac60ed65293129cb0bfee", "text": "Today, Convolution Neural Networks (CNN) is adopted by various application areas such as computer vision, speech recognition, and natural language processing. Due to a massive amount of computing for CNN, CNN running on an embedded platform may not meet the performance requirement. In this paper, we propose a system-on-chip (SoC) CNN architecture synthesized by high level synthesis (HLS). HLS is an effective hardware (HW) synthesis method in terms of both development effort and performance. However, the implementation should be optimized carefully in order to achieve a satisfactory performance. Thus, we apply several optimization techniques to the proposed CNN architecture to satisfy the performance requirement. The proposed CNN architecture implemented on a Xilinx's Zynq platform has achieved 23% faster and 9.05 times better throughput per energy consumption than an implementation on an Intel i7 Core processor.", "title": "" }, { "docid": "eb56599f1c41563e7d8d9951f6dba061", "text": "Tracking whole-body human pose in physical human-machine interactions is challenging because of highly dimensional human motions and lack of inexpensive, nonintrusive motion sensors in outdoor environment. In this paper, we present a computational scheme to estimate the human whole-body pose with application to bicycle riding using a small set of wearable sensors. The estimation scheme is built on the fusion of gyroscopes, accelerometers, force sensors, and physical rider-bicycle interaction constraints through an extended Kalman filter design. The use of physical rider-bicycle interaction constraints helps not only eliminate the integration drifts of inertial sensor measurements but also reduce the number of the needed wearable sensors for pose estimation. For each set of the upper and the lower limb, only one tri-axial gyroscope is needed to accurately obtain the 3-D pose information. The drift-free, reliable estimation performance is demonstrated through both indoor and outdoor riding experiments.", "title": "" }, { "docid": "1176abf11f866dda3a76ce080df07c05", "text": "Google Flu Trends can detect regional outbreaks of influenza 7-10 days before conventional Centers for Disease Control and Prevention surveillance systems. We describe the Google Trends tool, explain how the data are processed, present examples, and discuss its strengths and limitations. Google Trends shows great promise as a timely, robust, and sensitive surveillance system. 
It is best used for surveillance of epidemics and diseases with high prevalences and is currently better suited to track disease activity in developed countries, because to be most effective, it requires large populations of Web search users. Spikes in search volume are currently hard to interpret but have the benefit of increasing vigilance. Google should work with public health care practitioners to develop specialized tools, using Google Flu Trends as a blueprint, to track infectious diseases. Suitable Web search query proxies for diseases need to be established for specialized tools or syndromic surveillance. This unique and innovative technology takes us one step closer to true real-time outbreak surveillance.", "title": "" }, { "docid": "75ef3706a44edf1a96bcb0ce79b07761", "text": "Bag-of-words (BOW), which represents an image by the histogram of local patches on the basis of a visual vocabulary, has attracted intensive attention in visual categorization due to its good performance and flexibility. Conventional BOW neglects the contextual relations between local patches due to its Naïve Bayesian assumption. However, it is well known that contextual relations play an important role for human beings to recognize visual categories from their local appearance. This paper proposes a novel contextual bag-of-words (CBOW) representation to model two kinds of typical contextual relations between local patches, i.e., a semantic conceptual relation and a spatial neighboring relation. To model the semantic conceptual relation, visual words are grouped on multiple semantic levels according to the similarity of class distribution induced by them, accordingly local patches are encoded and images are represented. To explore the spatial neighboring relation, an automatic term extraction technique is adopted to measure the confidence that neighboring visual words are relevant. Word groups with high relevance are used and their statistics are incorporated into the BOW representation. Classification is taken using the support vector machine with an efficient kernel to incorporate the relational information. The proposed approach is extensively evaluated on two kinds of visual categorization tasks, i.e., video event and scene categorization. Experimental results demonstrate the importance of contextual relations of local patches and the CBOW shows superior performance to conventional BOW.", "title": "" }, { "docid": "bcda77a0de7423a2a4331ff87ce9e969", "text": "Because of the increasingly competitive nature of the computer manufacturing industry, Compaq Computer Corporation has made some trend-setting changes in the way it does business. One of these changes is the extension of Compaq's call-logging sy ste problem-resolution component that assists customer support personnel in determining the resolution to a customer's questions and problems. Recently, Compaq extended its customer service to provide not only dealer support but also direct end user support; it is also accepting ownership of any Compaq customer's problems in a Banyan, Mi-crosoft, Novell, or SCO UNIX operating environment. One of the tools that makes this feat possible is SMART (support management automated reasoning technology). 
SMART is part of a Compaq strategy to increase the effectiveness of the customer support staff and reduce overall cost to the organization by retaining problem-solving knowledge and making it available to the entire support staff at the point it is needed.", "title": "" }, { "docid": "0d966c39aabe4f51181b1e8cf520cae3", "text": "The deflated surfaces of the alluvial fans in Saheki crater reveal the most detailed record of fan stratigraphy and evolution found, to date, on Mars. During deposition of at least the uppermost 100 m of fan deposits, discharges from the source basin consisted of channelized flows transporting sediment (which we infer to be primarily sand- and gravel-sized) as bedload coupled with extensive overbank mud-rich flows depositing planar beds of sand-sized or finer sediment. Flow events are inferred to have been of modest magnitude (probably less than ~60 m3/s), of short duration, and probably occupied only a few distributaries during any individual flow event. Occasional channel avulsions resulted in the distribution of sediment across the entire fan. A comparison with fine-grained alluvial fans in Chile’s Atacama Desert provides insights into the processes responsible for constructing the Saheki crater fans: sediment is deposited by channelized flows (transporting sand through boulder-sized material) and overbank mudflows (sand size and finer) and wind erosion leaves channels expressed in inverted topographic relief. The most likely source of water was snowmelt released after annual or epochal accumulation of snow in the headwater source basin on the interior crater rim during the Hesperian to Amazonian periods. We infer the Saheki fans to have been constructed by many hundreds of separate flow events, and accumulation of the necessary snow and release of meltwater may have required favorable orbital configurations or transient global warming.", "title": "" }, { "docid": "d84bd9aecd5e5a5b744bbdbffddfd65f", "text": "Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver. However, the index construction of these variables could result in their strong correlation, thus preventing rated characters from being plotted accurately. Phase 1 of this study tested the indices of the Godspeed questionnaire as measures of humanlike characters. The results indicate significant and strong correlations among the relevant indices (Bartneck, Kulić, Croft, & Zoghbi, 2009). Phase 2 of this study developed alternative indices with nonsignificant correlations (p > .05) between the proposed y-axis eeriness and x-axis perceived humanness (r = .02). The new humanness and eeriness indices facilitate plotting relations among rated characters of varying human likeness. 2010 Elsevier Ltd. All rights reserved. 1. Plotting emotional responses to humanlike characters Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver (Fig. 1). The graph predicts that more human-looking characters will be perceived as more agreeable up to a point at which they become so human people find their nonhuman imperfections unsettling (MacDorman, Green, Ho, & Koch, 2009; MacDorman & Ishiguro, 2006; Mori, 1970). This dip in appraisal marks the start of the uncanny valley (bukimi no tani in Japanese).
As characters near complete human likeness, they rise out of the valley, and people once again feel at ease with them. In essence, a character’s imperfections expose a mismatch between the human qualities that are expected and the nonhuman qualities that instead follow, or vice versa. As an example of things that lie in the uncanny valley, Mori (1970) cites corpses, zombies, mannequins coming to life, and lifelike prosthetic hands. Assuming the uncanny valley exists, what dependent variable is appropriate to represent Mori’s graph? Mori referred to the y-axis as shinwakan, a neologism even in Japanese, which has been variously translated as familiarity, rapport, and comfort level. Bartneck, Kanda, Ishiguro, and Hagita (2009) have proposed using likeability to represent shinwakan, and they applied a likeability index to the evaluation of interactions with Ishiguro’s android double, the Geminoid HI-1. Likeability is virtually synonymous with interpersonal warmth (Asch, 1946; Fiske, Cuddy, & Glick, 2007; Rosenberg, Nelson, & Vivekananthan, 1968), which is also strongly correlated with other important measures, such as comfortability, communality, sociability, and positive (vs. negative) affect (Abele & Wojciszke, 2007; MacDorman, Ough, & Ho, 2007; Mehrabian & Russell, 1974; Sproull, Subramani, Kiesler, Walker, & Waters, 1996; Wojciszke, Abele, & Baryla, 2009). Warmth is the primary dimension of human social perception, accounting for 53% of the variance in perceptions of everyday social behaviors (Fiske, Cuddy, Glick, & Xu, 2002; Fiske et al., 2007; Wojciszke, Bazinska, & Jaworski, 1998). Despite the importance of warmth, this concept misses the essence of the uncanny valley. Mori (1970) refers to negative shinwakan as bukimi, which translates as eeriness. However, eeriness is not the negative anchor of warmth. A person can be cold and disagreeable without being eerie—at least not eerie in the way that an artificial human being is eerie. In addition, the set of negative emotions that predict eeriness (e.g., fear, anxiety, and disgust) are more specific than coldness (Ho, MacDorman, & Pramono, 2008). Thus, shinwakan and bukimi appear to constitute distinct dimensions. Although much has been written on potential benchmarks for anthropomorphic robots (for reviews see Kahn et al., 2007; MacDorman & Cowley, 2006; MacDorman & Kahn, 2007), no indices have been developed and empirically validated for measuring shinwakan or related concepts across a range of humanlike stimuli, such as computer-animated human characters and humanoid robots. The Godspeed questionnaire, compiled by Bartneck, Kulić, Croft, and Zoghbi (2009), includes at least two concepts, anthropomorphism and likeability, that could potentially serve as the xand y-axes of Mori’s graph (Bartneck, Kanda, et al., 2009). Although the 0747-5632/$ see front matter 2010 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2010.05.015 * Corresponding author. Tel.: +1 317 215 7040. E-mail address: kmacdorm@indiana.edu (K.F. MacDorman). URL: http://www.macdorman.com (K.F. MacDorman). Computers in Human Behavior 26 (2010) 1508–1518", "title": "" }, { "docid": "470dccf9f447a90dee70f390be186824", "text": "Because of financial market imperfections, such as those generated by asymmetric information in financial markets, which lead to breakdowns in markets, like that for equity, in which risks are shared, firms act in a risk-averse manner. 
The resulting macroeconomic model accounts for many widely observed aspects of actual business cycles: (a) cyclical movements in real product wages, (b) cyclical patterns of output and investment including inventories, (c) sensitivity of the economy to small perturbations, and (d) persistence. More downward flexibility in wages and prices may exacerbate the plight of an economy that is in a deep recession.", "title": "" }, { "docid": "84f2072f32d2a29d372eef0f4622ddce", "text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure", "title": "" }, { "docid": "693751cc1d963c63498d56012fe3f8b6", "text": "Automatic license plate recognition (LPR) plays an important role in numerous applications and a number of techniques have been proposed. However, most of them worked under restricted conditions, such as fixed illumination, limited vehicle speed, designated routes, and stationary backgrounds. In this study, as few constraints as possible on the working environment are considered. The proposed LPR technique consists of two main modules: a license plate locating module and a license number identification module. The former characterized by fuzzy disciplines attempts to extract license plates from an input image, while the latter conceptualized in terms of neural subjects aims to identify the number present in a license plate. Experiments have been conducted for the respective modules. In the experiment on locating license plates, 1088 images taken from various scenes and under different conditions were employed. Of which, 23 images have been failed to locate the license plates present in the images; the license plate location rate of success is 97.9%. In the experiment on identifying license number, 1065 images, from which license plates have been successfully located, were used. Of which, 47 images have been failed to identify the numbers of the license plates located in the images; the identification rate of success is 95.6%. Combining the above two rates, the overall rate of success for our LPR algorithm is 93.7%.", "title": "" }, { "docid": "998591070fcfeaa307c5a6c807eabc30", "text": "Efficient vertical mobility is a critical component of tall building development and construction. This paper investigates recent advances in elevator technology and examines their impact on tall building development. It maps out, organizes, and collates complex and scattered information on multiple aspects of elevator design, and presents them in an accessible and non-technical discourse. Importantly, the paper contextualizes recent technological innovations by examining their implementations in recent major projects including One World Trade Center in New York; Shanghai Tower in Shanghai; Burj Khalifa in Dubai; Kingdom Tower in Jeddah, Saudi Arabia; and the green retrofit project of the Empire State Building in New York. 
Further, the paper discusses future vertical transportation models including a vertical subway concept, a space lift, and electromagnetic levitation technology. As these new technological advancements in elevator design empower architects to create new forms and shapes of large-scale, mixed-use developments, this paper concludes by highlighting the need for interdisciplinary research in incorporating elevators in skyscrapers.", "title": "" }, { "docid": "1dcfd9b82cddb3111df067497febdd8b", "text": "Studies investigating the prevalence of psychiatric disorders among trans individuals have identified elevated rates of psychopathology. Research has also provided conflicting psychiatric outcomes following gender-confirming medical interventions. This review identifies 38 cross-sectional and longitudinal studies describing prevalence rates of psychiatric disorders and psychiatric outcomes, pre- and post-gender-confirming medical interventions, for people with gender dysphoria. It indicates that, although the levels of psychopathology and psychiatric disorders in trans people attending services at the time of assessment are higher than in the cis population, they do improve following gender-confirming medical intervention, in many cases reaching normative values. The main Axis I psychiatric disorders were found to be depression and anxiety disorder. Other major psychiatric disorders, such as schizophrenia and bipolar disorder, were rare and were no more prevalent than in the general population. There was conflicting evidence regarding gender differences: some studies found higher psychopathology in trans women, while others found no differences between gender groups. Although many studies were methodologically weak, and included people at different stages of transition within the same cohort of patients, overall this review indicates that trans people attending transgender health-care services appear to have a higher risk of psychiatric morbidity (that improves following treatment), and thus confirms the vulnerability of this population.", "title": "" }, { "docid": "3e6c4f94570670e13f357a5ceff83ed3", "text": "Day by day more and more devices are getting connected to the Internet and with the advent of the Internet of Things, this rate has had an exponential growth. The lack of security in devices connected to the IoT is making them hot targets for cyber-criminals and strength of botnet attacks have increased drastically. Botnets are the technological backbones of multitudinous attacks including Distributed Denial of Service (DDoS), SPAM, identity theft and organizational spying. The 2016 Dyn cyber attack involved multiple DDoS attacks with an estimated throughput of 1.2 terabits per second; the attack is the largest DDoS attack on record. In this paper, we compare three different techniques for botnet detection with each having its unique use cases. The results of the detection methods were verified using ISCX Intrusion Detection Dataset and the CTU-13 Dataset.", "title": "" }, { "docid": "8858053a805375aba9d8e71acfd7b826", "text": "With the accelerating rate of globalization, business exchanges are carried out cross the border, as a result there is a growing demand for talents professional both in English and Business. We can see that at present Business English courses are offered by many language schools in the aim of meeting the need for Business English talent. Many researchers argue that no differences can be defined between Business English teaching and General English teaching. 
However, this paper concludes that Business English is different from General English at least in such aspects as in the role of teacher, in course design, in teaching models, etc., thus different teaching methods should be applied in order to realize expected teaching goals.", "title": "" }, { "docid": "6f8f7c855ea717ab79af1d5271710408", "text": "In the competitive and low entrance barrier beauty industry, customer loyalty is a critical factor for business success. Research literature of customer relationship management recommends various factors contributing to customer loyalty in the general setting, however, there are insufficient studies empirically weigh the importance of each critical factor for the beauty industry. This study investigates and ranks empirically the critical factors, which contributes to customer loyalty of Online-to-Offline (O2O) marketing in the beauty industry. Our result shows that customer satisfaction, customer switching costs, customer trust, corporate image and customer value positively influence customer loyalty of O2O marketing and in the order of decreasing importance. Attributes contributing to the five critical factors have also been studied and ranked. Findings of this study can help the beauty industry to develop an effective O2O marketing plan and hence customer loyalty can be enhanced through the process of implementing targeted marketing activities.", "title": "" }, { "docid": "db693b698c2cbc35a4b13c2e4a345f6b", "text": "In this paper, a new design of a loaded cross dipole antennas (LCDA) with an omni-directional radiation pattern in the horizontal plane and broad-band characteristics is investigated. An efficient optimization procedure based on a genetic algorithm is employed to design the LCDA and to determine the parameters working over a 25:1 bandwidth. The simulation results are compared with measurements.", "title": "" }, { "docid": "5f49c93d7007f0f14f1410ce7805b29a", "text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. 
Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.", "title": "" }, { "docid": "e5b95c0b6f9843ccf81f652c92768f66", "text": "Many visual applications have benefited from the outburst of web images, yet the imprecise and incomplete tags arbitrarily provided by users, as the thorn of the rose, may hamper the performance of retrieval or indexing systems relying on such data. In this paper, we propose a novel locality sensitive low-rank model for image tag completion, which approximates the global nonlinear model with a collection of local linear models. To effectively infuse the idea of locality sensitivity, a simple and effective pre-processing module is designed to learn suitable representation for data partition, and a global consensus regularizer is introduced to mitigate the risk of overfitting. Meanwhile, low-rank matrix factorization is employed as local models, where the local geometry structures are preserved for the low-dimensional representation of both tags and samples. Extensive empirical evaluations conducted on three datasets demonstrate the effectiveness and efficiency of the proposed method, where our method outperforms pervious ones by a large margin.", "title": "" } ]
scidocsrr
affb45576ee4afb4926af345c1ef2f5c
Forensic analysis of encrypted instant messaging applications on Android
[ { "docid": "4e938aed527769ad65d85bba48151d21", "text": "We provide a thorough description of all the artifacts that are generated by the messenger application Telegram on Android OS. We also provide interpretation of messages that are generated and how they relate to one another. Based on the results of digital forensics investigation and analysis in this paper, an analyst/investigator will be able to read, reconstruct and provide chronological explanations of messages which are generated by the user. Using three different smartphone device vendors and Android OS versions as the objects of our experiments, we conducted tests in a forensically sound manner.", "title": "" }, { "docid": "e7a6082f1b6c441ebdde238cc8eb21c2", "text": "We present the forensic analysis of the artifacts generated on Android smartphones by ChatSecure, a secure Instant Messaging application that provides strong encryption for transmitted and locally-stored data to ensure the privacy of its users. We show that ChatSecure stores local copies of both exchanged messages and files into two distinct, AES-256 encrypted databases, and we devise a technique able to decrypt them when the secret passphrase, chosen by the user as the initial step of the encryption process, is known. Furthermore, we show how this passphrase can be identified and extracted from the volatile memory of the device, where it persists for the entire execution of ChatSecure after having been entered by the user, thus allowing one Please, cite as: Cosimo Anglano, Massimo Canonico, Marco Guazzone, “Forensic Analysis of the ChatSecure Instant Messaging Application on Android Smartphones,” Digital Investigation, Volume 19, December 2016, Pages 44–59, DOI: 10.1016/j.diin.2016.10.001 Link to publisher: http://dx.doi.org/10.1016/j.diin.2016.10.001 ∗Corresponding author. Address: viale T. Michel 11, 15121 Alessandria (Italy). Phone: +39 0131 360188. Email addresses: cosimo.anglano@uniupo.it (Cosimo Anglano), massimo.canonico@uniupo.it (Massimo Canonico), marco.guazzone@uniupo.it (Marco Guazzone) Preprint submitted to Digital Investigation October 24, 2016 to carry out decryption even if the passphrase is not revealed by the user. Finally, we discuss how to analyze and correlate the data stored in the databases used by ChatSecure to identify the IM accounts used by the user and his/her buddies to communicate, as well as to reconstruct the chronology and contents of the messages and files that have been exchanged among them. For our study we devise and use an experimental methodology, based on the use of emulated devices, that provides a very high degree of reproducibility of the results, and we validate the results it yields against those obtained from real smartphones.", "title": "" }, { "docid": "5dad207fe80469fe2b80d1f1e967575e", "text": "As the geolocation capabilities of smartphones continue to improve, developers have continued to create more innovative applications that rely on this location information for their primary function. This can be seen with Niantic’s release of Pokémon GO, which is a massively multiplayer online role playing and augmented reality game. This game became immensely popular within just a few days of its release. However, it also had the propensity to be a distraction to drivers resulting in numerous accidents, and was used to as a tool by armed robbers to lure unsuspecting users into secluded areas. 
This facilitates a need for forensic investigators to be able to analyze the data within the application in order to determine if it may have been involved in these incidents. Because this application is new, limited research has been conducted regarding the artifacts that can be recovered from the application. In this paper, we aim to fill the gaps within the current research by assessing what forensically relevant information may be recovered from the application, and understanding the circumstances behind the creation of this information. Our research focuses primarily on the artifacts generated by the Upsight analytics platform, those contained within the bundles directory, and the Pokémon Go Plus accessory. Moreover, we present our new application specific analysis tool that is capable of extracting forensic artifacts from a backup of the Android application, and presenting them to an investigator in an easily readable format. This analysis tool exceeds the capabilities of UFED Physical Analyzer in processing Pokémon GO application data.", "title": "" } ]
[ { "docid": "1ae16863be5df70d33d4a7f6a685ab17", "text": "Frank Chen • Zvi Drezner • Jennifer K. Ryan • David Simchi-Levi Decision Sciences Department, National University of Singapore, 119260 Singapore Department of MS & IS, California State University, Fullerton, California 92834 School of Industrial Engineering, Purdue University, West Lafayette, Indiana 47907 Department of IE & MS, Northwestern University, Evanston, Illinois 60208 fbachen@nus.edu.sg • drezner@exchange.fullerton.edu • jkryan@ecn.purdue.edu • levi@iems.nwu.edu", "title": "" }, { "docid": "538f5c7185a6a045ef2719e35b224181", "text": "Robotics has been widely used in education as a learning tool to attract and motivate students in performing laboratory experiments within the context of mechatronics, electronics, microcomputer, and control. In this paper we propose an implementation of cascaded PID control algorithm for line follower balancing robot. The algorithm is implemented on ADROIT V1 education robot kits. The robot should be able to follow the trajectory given by the circular guideline while maintaining its balance condition. The controller also designed to control the speed of robot movement while tracking the line. To obtain this purpose, there are three controllers that is used in the same time; balancing controller, speed controller and the line following controller. Those three controllers are cascaded to control the movement of the robot that uses two motors as its actuator. From the experiment, the proposed cascaded PID controller shows an acceptable performance for the robot to maintain its balance position while following the circular line with the given speed setpoint.", "title": "" }, { "docid": "8f13dd664f1d74c9684fc4431bcda3da", "text": "The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. Through an extensive set of experiments, we conclude the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. 
Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.", "title": "" }, { "docid": "bf65f2c68808755cfcd13e6cc7d0ccab", "text": "Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.", "title": "" }, { "docid": "8f444ac95ff664e06e1194dd096e4f31", "text": "Entity alignment aims to link entities and their counterparts among multiple knowledge graphs (KGs). Most existing methods typically rely on external information of entities such as Wikipedia links and require costly manual feature construction to complete alignment. In this paper, we present a novel approach for entity alignment via joint knowledge embeddings. Our method jointly encodes both entities and relations of various KGs into a unified low-dimensional semantic space according to a small seed set of aligned entities. During this process, we can align entities according to their semantic distance in this joint semantic space. More specifically, we present an iterative and parameter sharing method to improve alignment performance. 
Experiment results on realworld datasets show that, as compared to baselines, our method achieves significant improvements on entity alignment, and can further improve knowledge graph completion performance on various KGs with the favor of joint knowledge embeddings.", "title": "" }, { "docid": "453191a57a9282248b0d5b8a85fa4ce0", "text": "The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.", "title": "" }, { "docid": "a45818ee6b078e3b153aae7995558e4f", "text": "The reliability of the transmission of the switching signal of IGBT in a static converter is crucial. In fact, if the switching signals are badly transmitted, the power converter can be short-circuited with dramatic consequences. Thus, the operating of such a system can be stopped with heavy economic consequences, as it is the case for an electric train. Many techniques have been developed to achieve solutions for a safe transmission of switching signals with a good galvanic insulation. In very high-voltage, over 10 kV, an optimal solution is to use optic fibres. This technology is limited by the fibre degradation in high temperature atmosphere. Actually, this problem exists in trains. The common use of the radio frequency transmission (RFT) can be exploited to achieve an original IGBT wireless driver. This solution seems to be interesting because high temperature do not interfere with radio frequency transmission. However, radiated electromagnetic interferences (EMI) are drastically important in such an electrical environment, EMI can disturb the RFT. In order to optimise the transmission of switching signals, we have decided to transmit the signals through the energy supplying link. This last device is constituted by a double galvanic insulation transformer (DGIT). The difficulty is to transmit the energy, which is used for the IGBT driver supply and the switching signals in the same loop wire. 
The paper will highlight this aspect", "title": "" }, { "docid": "0b1db23ae4767d7653e3198919706e99", "text": "Greenhouse cultivation has evolved from simple covered rows of open-fields crops to highly sophisticated controlled environment agriculture (CEA) facilities that projected the image of plant factories for urban agriculture. The advances and improvements in CEA have promoted the scientific solutions for the efficient production of plants in populated cities and multi-story buildings. Successful deployment of CEA for urban agriculture requires many components and subsystems, as well as the understanding of the external influencing factors that should be systematically considered and integrated. This review is an attempt to highlight some of the most recent advances in greenhouse technology and CEA in order to raise the awareness for technology transfer and adaptation, which is necessary for a successful transition to urban agriculture. This study reviewed several aspects of a high-tech CEA system including improvements in the frame and covering materials, environment perception and data sharing, and advanced microclimate control and energy optimization models. This research highlighted urban agriculture and its derivatives, including vertical farming, rooftop greenhouses and plant factories which are the extensions of CEA and have emerged as a response to the growing population, environmental degradation, and urbanization that are threatening food security. Finally, several opportunities and challenges have been identified in implementing the integrated CEA and vertical farming for urban agriculture.", "title": "" }, { "docid": "1e7b1bbaba8b9f9a1e28db42e18c23bf", "text": "To use their pool of resources efficiently, distributed stream-processing systems push query operators to nodes within the network. Currently, these operators, ranging from simple filters to custom business logic, are placed manually at intermediate nodes along the transmission path to meet application-specific performance goals. Determining placement locations is challenging because network and node conditions change over time and because streams may interact with each other, opening venues for reuse and repositioning of operators. This paper describes a stream-based overlay network (SBON), a layer between a stream-processing system and the physical network that manages operator placement for stream-processing systems. Our design is based on a cost space, an abstract representation of the network and on-going streams, which permits decentralized, large-scale multi-query optimization decisions. We present an evaluation of the SBON approach through simulation, experiments on PlanetLab, and an integration with Borealis, an existing stream-processing engine. Our results show that an SBON consistently improves network utilization, provides low stream latency, and enables dynamic optimization at low engineering cost.", "title": "" }, { "docid": "34d2c2349291bed154ef29f2f5472cb5", "text": "We present a novel algorithm for automatically co-segmenting a set of shapes from a common family into consistent parts. Starting from over-segmentations of shapes, our approach generates the segmentations by grouping the primitive patches of the shapes directly and obtains their correspondences simultaneously. The core of the algorithm is to compute an affinity matrix where each entry encodes the similarity between two patches, which is measured based on the geometric features of patches. 
Instead of concatenating the different features into one feature descriptor, we formulate co-segmentation into a subspace clustering problem in multiple feature spaces. Specifically, to fuse multiple features, we propose a new formulation of optimization with a consistent penalty, which facilitates both the identification of most similar patches and selection of master features for two similar patches. Therefore the affinity matrices for various features are sparsity-consistent and the similarity between a pair of patches may be determined by part of (instead of all) features. Experimental results have shown how our algorithm jointly extracts consistent parts across the collection in a good manner.", "title": "" }, { "docid": "e13e0a64d9c9ede58590d1cc113fbada", "text": "Background The blood-brain barrier (BBB) has been hypothesized to play a role in migraine since the late 1970s. Despite this, limited investigation of the BBB in migraine has been conducted. We used the inflammatory soup rat model of trigeminal allodynia, which closely mimics chronic migraine, to determine the impact of repeated dural inflammatory stimulation on BBB permeability. Methods The sodium fluorescein BBB permeability assay was used in multiple brain regions (trigeminal nucleus caudalis (TNC), periaqueductal grey, frontal cortex, sub-cortex, and cortex directly below the area of dural activation) during the episodic and chronic stages of repeated inflammatory dural stimulation. Glial activation was assessed in the TNC via GFAP and OX42 immunoreactivity. Minocycline was tested for its ability to prevent BBB disruption and trigeminal sensitivity. Results No astrocyte or microglial activation was found during the episodic stage, but BBB permeability and trigeminal sensitivity were increased. Astrocyte and microglial activation, BBB permeability, and trigeminal sensitivity were increased during the chronic stage. These changes were only found in the TNC. Minocycline treatment prevented BBB permeability modulation and trigeminal sensitivity during the episodic and chronic stages. Discussion Modulation of BBB permeability occurs centrally within the TNC following repeated dural inflammatory stimulation and may play a role in migraine.", "title": "" }, { "docid": "89dd97465c8373bb9dabf3cbb26a4448", "text": "Unidirectional connections from the cortex to the matrix of the corpus striatum initiate the cortico-basal ganglia (BG)-thalamocortical loop, thought to be important in momentary action selection and in longer-term fine tuning of behavioural repertoire; a discrete set of striatal compartments, striosomes, has the complementary role of registering or anticipating reward that shapes corticostriatal plasticity. Re-entrant signals traversing the cortico-BG loop impact predominantly frontal cortices, conveyed through topographically ordered output channels; by contrast, striatal input signals originate from a far broader span of cortex, and are far more divergent in their termination. The term ‘disclosed loop’ is introduced to describe this organisation: a closed circuit that is open to outside influence at the initial stage of cortical input. The closed circuit component of corticostriatal afferents is newly dubbed ‘operative’, as it is proposed to establish the bid for action selection on the part of an incipient cortical action plan; the broader set of converging corticostriatal afferents is described as contextual. 
A corollary of this proposal is that every unit of the striatal volume, including the long, C-shaped tail of the caudate nucleus, should receive a mandatory component of operative input, and hence include at least one area of BG-recipient cortex amongst the sources of its corticostriatal afferents. Individual operative afferents contact twin classes of GABAergic striatal projection neuron (SPN), distinguished by their neurochemical character, and onward circuitry. This is the basis of the classic direct and indirect pathway model of the cortico-BG loop. Each pathway utilises a serial chain of inhibition, with two such links, or three, providing positive and negative feedback, respectively. Operative co-activation of direct and indirect SPNs is, therefore, pictured to simultaneously promote action, and to restrain it. The balance of this rival activity is determined by the contextual inputs, which summarise the external and internal sensory environment, and the state of ongoing behavioural priorities. Notably, the distributed sources of contextual convergence upon a striatal locus mirror the transcortical network harnessed by the origin of the operative input to that locus, thereby capturing a similar set of contingencies relevant to determining action. The disclosed loop formulation of corticostriatal and subsequent BG loop circuitry, as advanced here, refines the operating rationale of the classic model and allows the integration of more recent anatomical and physiological data, some of which can appear at variance with the classic model. Equally, it provides a lucid functional context for continuing cellular studies of SPN biophysics and mechanisms of synaptic plasticity.", "title": "" }, { "docid": "37d353f5b8f0034209f75a3848580642", "text": "(NR) is the first interactive data repository with a web-based platform for visual interactive analytics. Unlike other data repositories (e.g., UCI ML Data Repository, and SNAP), the network data repository (networkrepository.com) allows users to not only download, but to interactively analyze and visualize such data using our web-based interactive graph analytics platform. Users can in real-time analyze, visualize, compare, and explore data along many different dimensions. The aim of NR is to make it easy to discover key insights into the data extremely fast with little effort while also providing a medium for users to share data, visualizations, and insights. Other key factors that differentiate NR from the current data repositories is the number of graph datasets, their size, and variety. While other data repositories are static, they also lack a means for users to collaboratively discuss a particular dataset, corrections, or challenges with using the data for certain applications. In contrast, NR incorporates many social and collaborative aspects that facilitate scientific research, e.g., users can discuss each graph, post observations, and visualizations.", "title": "" }, { "docid": "08084de7a702b87bd8ffc1d36dbf67ea", "text": "In recent years, the mobile data traffic is increasing and many more frequency bands have been employed in cellular handsets. A simple π type tunable band elimination filter (BEF) with switching function has been developed using a wideband tunable surface acoustic wave (SAW) resonator circuit. The frequency of BEF is tuned approximately 31% by variable capacitors without spurious. In LTE low band, the arrangement of TX and RX frequencies is to be reversed in Band 13, 14 and 20 compared with the other bands. 
The steep edge slopes of the developed filter can be exchanged according to the resonance condition and switching. With combining the TX and RX tunable BEFs and the small sized broadband circulator, a new tunable duplexer has been fabricated, and its TX-RX isolation is proved to be more than 50dB in LTE low band operations.", "title": "" }, { "docid": "d13145bc68472ed9a06bafd86357c5dd", "text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but it also has a high computation cost even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple yarn-level ambient occlusion approximation and self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.", "title": "" }, { "docid": "a4fb1919a1bf92608a55bc3feedf897d", "text": "We develop an algebraic framework, Logic Programming Doctrines, for the syntax, proof theory, operational semantics and model theory of Horn Clause logic programming based on indexed premonoidal categories. Our aim is to provide a uniform framework for logic programming and its extensions capable of incorporating constraints, abstract data types, features imported from other programming language paradigms and a mathematical description of the state space in a declarative manner. We define a new way to embed information about data into logic programming derivations by building a sketch-like description of data structures directly into an indexed category of proofs. We give an algebraic axiomatization of bottom-up semantics in this general setting, describing categorical models as fixed points of a continuous operator. © 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b14b36728c1775a8469bce1c42ce8783", "text": "Inorganic scintillators are commonly used as sensors for ionizing radiation detectors in a variety of applications, ranging from particle and nuclear physics detectors, medical imaging, nuclear installations radiation control, homeland security, well oil logging and a number of industrial non-destructive investigations. For all these applications, the scintillation light produced by the energy deposited in the scintillator allows the determination of the position, the energy and the time of the event. However, the performance of these detectors is often limited by the amount of light collected on the photodetector. A major limitation comes from the fact that inorganic scintillators are generally characterized by a high refractive index, as a consequence of the required high density to provide the necessary stopping power for ionizing radiation. 
The index mismatch between the crystal and the surrounding medium (air or optical grease) strongly limits the light extraction efficiency because of total internal reflection (TIR), increasing the travel path and the absorption probability through multiple bouncings of the photons in the crystal. Photonic crystals can overcome this problem and produce a controllable index matching between the crystal and the output medium through an interface made of a thin nano-structured layer of optically-transparent high index material. This review presents a summary of the works aiming at improving the light collection efficiency of scintillators using photonic crystals since this idea was introduced 10 years ago.", "title": "" }, { "docid": "3b78988b74c2e42827c9e75e37d2223e", "text": "This paper addresses how to construct a RBAC-compatible attribute-based encryption (ABE) for secure cloud storage, which provides a user-friendly and easy-to-manage security mechanism without user intervention. Similar to role hierarchy in RBAC, attribute lattice introduced into ABE is used to define a seniority relation among all values of an attribute, whereby a user holding the senior attribute values acquires permissions of their juniors. Based on these notations, we present a new ABE scheme called Attribute-Based Encryption with Attribute Lattice (ABE-AL) that provides an efficient approach to implement comparison operations between attribute values on a poset derived from attribute lattice. By using bilinear groups of composite order, we propose a practical construction of ABE-AL based on forward and backward derivation functions. Compared with prior solutions, our scheme offers a compact policy representation solution, which can significantly reduce the size of privatekeys and ciphertexts. Furthermore, our solution provides a richer expressive power of access policies to facilitate flexible access control for ABE scheme.", "title": "" }, { "docid": "a6fec60aeb6e5824ed07eaa3257969aa", "text": "What aspects of information assurance can be identified in Business-to-Consumer (B-toC) online transactions? The purpose of this research is to build a theoretical framework for studying information assurance based on a detailed analysis of academic literature for online exchanges in B-to-C electronic commerce. Further, a semantic network content analysis is conducted to analyze the representations of information assurance in B-to-C electronic commerce in the real online market place (transaction Web sites of selected Fortune 500 firms). The results show that the transaction websites focus on some perspectives and not on others. For example, we see an emphasis on the importance of technological and consumer behavioral elements of information assurance such as issues of online security and privacy. Further corporate practitioners place most emphasis on transaction-related information assurance issues. Interestingly, the product and institutional dimension of information assurance in online transaction websites are only", "title": "" }, { "docid": "581e3373ecfbc6c012df7c166636cc50", "text": "The deep convolutional neural network(CNN) has significantly raised the performance of image classification and face recognition. Softmax is usually used as supervision, but it only penalizes the classification loss. In this paper, we propose a novel auxiliary supervision signal called contrastive-center loss, which can further enhance the discriminative power of the features, for it learns a class center for each class. 
The proposed contrastive-center loss simultaneously considers intra-class compactness and inter-class separability, by penalizing the contrastive values between: (1)the distances of training samples to their corresponding class centers, and (2)the sum of the distances of training samples to their non-corresponding class centers. Experiments on different datasets demonstrate the effectiveness of contrastive-center loss.", "title": "" } ]
scidocsrr
74a11b3a1d2219bd9c69465f6b9f0d6a
Client Clustering for Hiring Modeling in Work Marketplaces
[ { "docid": "de7d29c7e11445e836bd04c003443c67", "text": "Logistic regression with `1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale `1-regularized logistic regression problems. Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). A variation on the basic method, that uses a preconditioned conjugate gradient method to compute the search step, can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes, on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.", "title": "" } ]
[ { "docid": "4c2248db49a810d727eac378cf9e3c0f", "text": "Based on Life Cycle Assessment (LCA) and Eco-indicator 99 method, a LCA model was applied to conduct environmental impact and end-of-life treatment policy analysis for secondary batteries. This model evaluated the cycle, recycle and waste treatment stages of secondary batteries. Nickel-Metal Hydride (Ni-MH) batteries and Lithium ion (Li-ion) batteries were chosen as the typical secondary batteries in this study. Through this research, the following results were found: (1) A basic number of cycles should be defined. A minimum cycle number of 200 would result in an obvious decline of environmental loads for both battery types. Batteries with high energy density and long life expectancy have small environmental loads. Products and technology that help increase energy density and life expectancy should be encouraged. (2) Secondary batteries should be sorted out from municipal garbage. Meanwhile, different types of discarded batteries should be treated separately under policies and regulations. (3) The incineration rate has obvious impact on the Eco-indicator points of Nickel-Metal Hydride (Ni-MH) batteries. The influence of recycle rate on Lithium ion (Li-ion) batteries is more obvious. These findings indicate that recycling is the most promising direction for reducing secondary batteries' environmental loads. The model proposed here can be used to evaluate environmental loads of other secondary batteries and it can be useful for proposing policies and countermeasures to reduce the environmental impact of secondary batteries.", "title": "" }, { "docid": "cd0c1507c1187e686c7641388413d3b5", "text": "Inference of three-dimensional motion from the fusion of inertial and visual sensory data has to contend with the preponderance of outliers in the latter. Robust filtering deals with the joint inference and classification task of selecting which data fits the model, and estimating its state. We derive the optimal discriminant and propose several approximations, some used in the literature, others new. We compare them analytically, by pointing to the assumptions underlying their approximations, and empirically. We show that the best performing method improves the performance of state-of-the-art visual-inertial sensor fusion systems, while retaining the same computational complexity.", "title": "" }, { "docid": "a1bd6742011302d35527cdbad73a82a3", "text": "The Semantic Web contains an enormous amount of information in the form of knowledge bases (KB). To make this information available, many question answering (QA) systems over KBs were created in the last years. Building a QA system over KBs is difficult because there are many different challenges to be solved. In order to address these challenges, QA systems generally combine techniques from natural language processing, information retrieval, machine learning and Semantic Web. The aim of this survey is to give an overview of the techniques used in current QA systems over KBs. We present the techniques used by the QA systems which were evaluated on a popular series of benchmarks: Question Answering over Linked Data. Techniques that solve the same task are first grouped together and then described. The advantages and disadvantages are discussed for each technique. This allows a direct comparison of similar techniques. 
Additionally, we point to techniques that are used over WebQuestions and SimpleQuestions, which are two other popular benchmarks for QA systems.", "title": "" }, { "docid": "9ecf815bfb76760f2166240aee3a6f24", "text": "This paper reviews the current state of power supply technology platforms and highlights future trends and challenges toward realizing fully monolithic power converters. This paper presents a detailed survey of relevant power converter technologies, namely power supply in package and power supply on chip (PwrSoC). The performance of different power converter solutions reported in the literature is benchmarked against existing commercial products. This paper presents a detailed review of integrated magnetics technologies, primarily microinductors, a key component in realizing a monolithic power converter. A detailed review and comparison of different microinductor structures and the magnetic materials used as inductor cores is presented. The deposition techniques for integrating the magnetic materials in the microinductor structures are discussed. This paper proposes the use of two performance metrics or figures of merit in order to compare the dc and ac performance of individual microinductor structures. Finally, the authors discuss future trends, key challenges, and potential solutions in the realization of the “holy grail” of monolithically integrated power supplies (PwrSoC).", "title": "" }, { "docid": "cc1b8f1689c45c53e461dc268c664f53", "text": "This paper presents a one switch silicon carbide JFET normally-ON resonant inverter applied to induction heating for consumer home cookers. The promising characteristics of silicon carbide (SiC) devices need to be verified in practical applications; therefore, the objective of this work is to compare Si IGBTs and normally-ON commercially available JFET in similar operating conditions, with two similar boards. The paper describes the gate circuit implemented, the design of the basic converter in ideal operation, namely Zero Voltage Switching (ZVS) and Zero Derivative Voltage Switching (ZVDS), as well as some preliminary comparative results for 700W and 2 kW output power delivered to an induction heating coil and load.", "title": "" }, { "docid": "e4861d48d54e0c48f241b5adb1a893e6", "text": "With the rapid development of the World Wide Web, electronic word-of-mouth interaction has made consumers active participants. Nowadays, a large number of reviews posted by the consumers on the Web provide valuable information to other consumers. Such information is highly essential for decision making and hence popular among the internet users. This information is very valuable not only for prospective consumers to make decisions but also for businesses in predicting the success and sustainability. In this paper, a Gini Index based feature selection method with Support Vector Machine (SVM) classifier is proposed for sentiment classification for large movie review data set. The results show that our Gini Index method has better classification performance in terms of reduced error rate and accuracy.", "title": "" }, { "docid": "ffa15e86e575d5fdf2ccf0dcafe74a93", "text": "We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal O*(√T)regret. The setting is a natural generalization of the nonstochastic multiarmed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of recent papers. 
We show how the difficulties encountered by previous approaches are overcome by the use of a self-concordant potential function. Our approach presents a novel connection between online learning and interior point methods.", "title": "" }, { "docid": "96516274e1eb8b9c53296a935f67ca2a", "text": "Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, that is, the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a “programmed” neural network with O(n) neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature.", "title": "" }, { "docid": "c736258623c7f977ebc00f5555d13e02", "text": "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements in (hierarchies) the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. 
The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.", "title": "" }, { "docid": "67ca6efda7f90024cc9ae50ebb4181b7", "text": "Nowadays data growth is directly proportional to time and it is a major challenge to store the data in an organised fashion. Document clustering is the solution for organising relevant documents together. In this paper, a web clustering algorithm namely WDC-KABC is proposed to cluster the web documents effectively. The proposed algorithm uses the features of both K-means and Artificial Bee Colony (ABC) clustering algorithm. In this paper, ABC algorithm is employed as the global search optimizer and K-means is used for refining the solutions. Thus, the quality of the cluster is improved. The performance of WDC-KABC is analysed with four different datasets (webkb, wap, rec0 and 7sectors). The proposed algorithm is compared with existing algorithms such as K-means, Particle Swarm Optimization, Hybrid of Particle Swarm Optimization and K-means and Ant Colony Optimization. The experimental results of WDC-KABC are satisfactory, in terms of precision, recall, f-measure, accuracy and error rate.", "title": "" }, { "docid": "ed2a2ede0be8581c0d719e247a2f1d96", "text": "Beginning Application Lifecycle Management is a guide to an area of rapidly growing interest within the development community: managing the entire cycle of building software. ALM is an area that spans everything from requirements specifications to retirement of an IT-system or application. Because its techniques allow you to deal with the process of developing applications across many areas of responsibility and across many different disciplines, the benefits and effects of ALM techniques used on your project can be wide-ranging and pronounced. In this book, author Joachim Rossberg will show you what ALM is and why it matters. He will also show you how you can assess your current situation and how you can use this assessment to create the road ahead for improving or implementing your own ALM process across all of your team’s development efforts. Beginning Application Lifecycle Management can be implemented on any platform. This book uses Microsoft Team Foundation Server as a foundation in many examples, but the key elements are platform independent and you’ll find the book written in a platform agnostic way.", "title": "" }, { "docid": "e4546038f0102d0faac18ac96e50793d", "text": "Ontologies have been increasingly used as a core representation formalism in medical information systems. Diagnosis is one of the highly relevant reasoning problems in this domain. In recent years this problem has captured attention also in the description logics community and various proposals on formalising abductive reasoning problems and their computational support appeared. In this paper, we focus on a practical diagnostic problem from a medical domain – the diagnosis of diabetes mellitus – and we try to formalize it in DL in such a way that the expected diagnoses are abductively derived. 
Our aim in this work is to analyze abductive reasoning in DL from a practical perspective, considering more complex cases than trivial examples typically considered by the theoryor algorithm-centered literature, and to evaluate the expressivity as well as the particular formulation of the abductive reasoning problem needed to capture medical diagnosis.", "title": "" }, { "docid": "9aad2d4dd17bb3906add18578df28580", "text": "Likelihood ratio policy gradient methods have been some of the most successful reinforcement learning algorithms, especially for learning on physical systems. We describe how the likelihood ratio policy gradient can be derived from an importance sampling perspective. This derivation highlights how likelihood ratio methods under-use past experience by (i) using the past experience to estimate only the gradient of the expected return U(θ) at the current policy parameterization θ, rather than to obtain a more complete estimate of U(θ), and (ii) using past experience under the current policy only rather than using all past experience to improve the estimates. We present a new policy search method, which leverages both of these observations as well as generalized baselines—a new technique which generalizes commonly used baseline techniques for policy gradient methods. Our algorithm outperforms standard likelihood ratio policy gradient algorithms on several testbeds.", "title": "" }, { "docid": "ec6e955f3f79ef1706fc6b9b16326370", "text": "Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in the recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of data for training. In this paper, we develop a photo-realistic simulator that can afford the generation of large amounts of training data (both images rendered from the UAV camera and its controls) to teach a UAV to autonomously race through challenging tracks. We train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing. Training is done through imitation learning enabled by data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.", "title": "" }, { "docid": "c450ac5c84d962bb7f2262cf48e1280a", "text": "Animal-assisted therapies have become widespread with programs targeting a variety of pathologies and populations. Despite its popularity, it is unclear if this therapy is useful. The aim of this systematic review is to establish the efficacy of Animal assisted therapies in the management of dementia, depression and other conditions in adult population. A search was conducted in MEDLINE, EMBASE, CINAHL, LILACS, ScienceDirect, and Taylor and Francis, OpenGrey, GreyLiteratureReport, ProQuest, and DIALNET. No language or study type filters were applied. Conditions studied included depression, dementia, multiple sclerosis, PTSD, stroke, spinal cord injury, and schizophrenia. Only articles published after the year 2000 using therapies with significant animal involvement were included. 23 articles and dissertations met inclusion criteria. Overall quality was low. The degree of animal interaction significantly influenced outcomes. 
Results are generally favorable, but more thorough and standardized research should be done to strengthen the existing evidence.", "title": "" }, { "docid": "c78c4dc2475c0fa382c2233c064efe4d", "text": "We give the first algorithm for kernel Nyström approximation that runs in linear time in the number of training points and is provably accurate for all kernel matrices, without dependence on regularity or incoherence conditions. The algorithm projects the kernel onto a set of s landmark points sampled by their ridge leverage scores, requiring just O(ns) kernel evaluations and O(ns2) additional runtime. While leverage score sampling has long been known to give strong theoretical guarantees for Nyström approximation, by employing a fast recursive sampling scheme, our algorithm is the first to make the approach scalable. Empirically we show that it finds more accurate kernel approximations in less time than popular techniques such as classic Nyström approximation and the random Fourier features method.", "title": "" }, { "docid": "90eeae710c92da9dd129647488b604c7", "text": "Finding information is becoming a major part of our daily life. Entire sectors, from Web users to scientists and intelligence analysts, are increasingly struggling to keep up with the larger and larger amounts of content published every day. With this much data, it is often easy to miss the big picture.\n In this article, we investigate methods for automatically connecting the dots---providing a structured, easy way to navigate within a new topic and discover hidden connections. We focus on the news domain: given two news articles, our system automatically finds a coherent chain linking them together. For example, it can recover the chain of events starting with the decline of home prices (January 2007), and ending with the health care debate (2009).\n We formalize the characteristics of a good chain and provide a fast search-driven algorithm to connect two fixed endpoints. We incorporate user feedback into our framework, allowing the stories to be refined and personalized. We also provide a method to handle partially-specified endpoints, for users who do not know both ends of a story. Finally, we evaluate our algorithm over real news data. Our user studies demonstrate that the objective we propose captures the users’ intuitive notion of coherence, and that our algorithm effectively helps users understand the news.", "title": "" }, { "docid": "497d72ce075f9bbcb2464c9ab20e28de", "text": "Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. 
Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon.", "title": "" }, { "docid": "ef9a51b5b3a4bcab7867819070801e8a", "text": "For any given research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the \"file drawer problem\" is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results. Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed.", "title": "" }, { "docid": "dc54b73eb740bc1bbdf1b834a7c40127", "text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.", "title": "" } ]
scidocsrr
924cf1a9602fb8f8f873493a39dcd967
HONE: Higher-Order Network Embeddings
[ { "docid": "738f60fbfe177eec52057c8e5ab43e55", "text": "From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains.", "title": "" }, { "docid": "2f3e10724dca50927bd1a39cfd1f45e5", "text": "Many recommendation systems suggest items to users by utilizing the techniques of collaborative filtering (CF) based on historical records of items that the users have viewed, purchased, or rated. Two major problems that most CF approaches have to resolve are scalability and sparseness of the user profiles. In this paper, we describe Alternating-Least-Squares with Weighted-λ-Regularization (ALS-WR), a parallel algorithm that we designed for the Netflix Prize, a large-scale collaborative filtering challenge. We use parallel Matlab on a Linux cluster as the experimental platform. We show empirically that the performance of ALS-WR monotonically increases with both the number of features and the number of ALS iterations. Our ALS-WR applied to the Netflix dataset with 1000 hidden features obtained a RMSE score of 0.8985, which is one of the best results based on a pure method. Combined with the parallel version of other known methods, we achieved a performance improvement of 5.91% over Netflix’s own CineMatch recommendation system. Our method is simple and scales well to very large datasets.", "title": "" }, { "docid": "37d353f5b8f0034209f75a3848580642", "text": "(NR) is the first interactive data repository with a web-based platform for visual interactive analytics. Unlike other data repositories (e.g., UCI ML Data Repository, and SNAP), the network data repository (networkrepository.com) allows users to not only download, but to interactively analyze and visualize such data using our web-based interactive graph analytics platform. Users can in real-time analyze, visualize, compare, and explore data along many different dimensions. The aim of NR is to make it easy to discover key insights into the data extremely fast with little effort while also providing a medium for users to share data, visualizations, and insights. 
Other key factors that differentiate NR from the current data repositories are the number of graph datasets, their size, and variety. While other data repositories are static, they also lack a means for users to collaboratively discuss a particular dataset, corrections, or challenges with using the data for certain applications. In contrast, NR incorporates many social and collaborative aspects that facilitate scientific research, e.g., users can discuss each graph, post observations, and visualizations.", "title": "" } ]
[ { "docid": "fdd94d3d9df0171e41179336bd282bdd", "text": "The authors propose a reinforcement-learning mechanism as a model for recurrent choice and extend it to account for skill learning. The model was inspired by recent research in neurophysiological studies of the basal ganglia and provides an integrated explanation of recurrent choice behavior and skill learning. The behavior includes effects of differential probabilities, magnitudes, variabilities, and delay of reinforcement. The model can also produce the violation of independence, preference reversals, and the goal gradient of reinforcement in maze learning. An experiment was conducted to study learning of action sequences in a multistep task. The fit of the model to the data demonstrated its ability to account for complex skill learning. The advantages of incorporating the mechanism into a larger cognitive architecture are discussed.", "title": "" }, { "docid": "0c7b5a51a0698f261d147b2aa77acc83", "text": "The extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness as disaster unfolds. In addition to textual content, people post overwhelming amounts of imagery content on social networks within minutes of a disaster hit. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in computer vision research, making sense of the imagery content in real-time during disasters remains a challenging task. One of the important challenges is that a large proportion of images shared on social media is redundant or irrelevant, which requires robust filtering mechanisms. Another important challenge is that images acquired after major disasters do not share the same characteristics as those in large-scale image collections with clean annotations of well-defined object categories such as house, car, airplane, cat, dog, etc., used traditionally in computer vision research. To tackle these challenges, we present a social media image processing pipeline that combines human and machine intelligence to perform two important tasks: (i) capturing and filtering of social media imagery content (i.e., real-time image streaming, de-duplication, and relevancy filtering); and (ii) actionable information extraction (i.e., damage severity assessment) as a core situational awareness task during an on-going crisis event. Results obtained from extensive experiments on real-world crisis datasets demonstrate the significance of the proposed pipeline for optimal utilization of both human and machine computing resources.", "title": "" }, { "docid": "55b76ecbc7c994f095b0c45cb6ae034c", "text": "of greenhouse gases in the atmosphere (IPCC, 2001), and ability of our agricultural systems to sustain producSociety is facing three related issues: overreliance on imported fuel, tion at rates needed to feed a growing world population increasing levels of greenhouse gases in the atmosphere, and producing sufficient food for a growing world population. The U.S. De(Cassman, 1999). Many papers have been written on partment of Energy and private enterprise are developing technology these topics both individually and in the various combinecessary to use high-cellulose feedstock, such as crop residues, for nations (Doran, 2002; Follett, 2001; Janzen et al., 1998a, ethanol production. Corn (Zea mays L.) residue can provide about 1998b; Lal et al., 1999). 
However, few authors have ad1.7 times more C than barley (Hordeum vulgare L.), oat (Avena sativa dressed all three topics together. L.), sorghum [Sorghum bicolor (L.) Moench], soybean [Glycine max Recent developments in the energy industry and ac(L.) Merr.], sunflower (Helianthus annuus L.), and wheat (Triticum tivity by entrepreneurs have prompted new strategies aestivum L.) residues based on production levels. Removal of crop for addressing the first issue, overreliance on imported residue from the field must be balanced against impacting the environfuels (Hettenhaus et al., 2000). This strategy expands use ment (soil erosion), maintaining soil organic matter levels, and preof biomass for fuel production and is contingent on deserving or enhancing productivity. Our objective is to summarize published works for potential impacts of wide-scale, corn stover collection velopment of new organisms or enzymes to convert on corn production capacity in Corn Belt soils. We address the issue of cellulosic (a high concentration of cellulose) biomass crop yield (sustainability) and related soil processes directly. However, [opposed to grain (starchy) biomass] to ethanol for use scarcity of data requires us to deal with the issue of greenhouse gases as a motor vehicle fuel. The U.S. DOE, in concert with indirectly and by inference. All ramifications of new management pracprivate enterprise, is making great strides toward develtices and crop uses must be explored and evaluated fully before an oping enzymes and improving efficiency in fuel producindustry is established. Our conclusion is that within limits, corn stover tion from biomass (DiPardo, 2000; Hettenhaus et al., can be harvested for ethanol production to provide a renewable, do2000). mestic source of energy that reduces greenhouse gases. RecommendaSources of cellulosic biomass are numerous (woody biotion for removal rates will vary based on regional yield, climatic mass crops and lumber industry wastes, forage crops, inconditions, and cultural practices. Agronomists are challenged to develop a procedure (tool) for recommending maximum permissible dustrial and municipal wastes, animal manure, and crop removal rates that ensure sustained soil productivity. residues); however, currently few sources are perceived to be available in sufficient quantity and quality to support development of an economically sized processing facility of about 1800 Mg dry matter d 1 (Hettenhaus T of the most pressing issues facing our society, et al., 2000), except crop residues (DiPardo, 2000). Bain the midterm, are overreliance on imported fuels gasse [remaining after sap extraction from sugarcane [U.S. Department of Energy (DOE) Office of Energy Ef(Saccharum officinarum L.)] in Louisiana and rice (Orficiency and Renewable Energy, 2002], increasing levels yza sativa L.) straw in California are regional examples of crop residues collected in current culture and availW.W. Wilhelm, USDA-ARS, 120 Keim Hall, Univ. of Nebraska, Linable for production of ethanol (DiPardo, 2000). Creatcoln, NE 68583-0934; J.M.F. Johnson, USDA-ARS, 803 Iowa Ave., ing an acceptable use or disposal procedure for these Morris, MN 56267-1065; J.L. Hatfield, 108 Natl. Soil Tilth Lab., 2150 residues represents a huge problem in the regions where Pammel Drive, Ames, IA 50011-3120; W.B. Voorhees, USDA-ARS (retired), 803 Iowa Ave., Morris, MN 56267-1065; and D.R. Linden, they are produced although the total quantity is not USDA-ARS (retired), 1991 Upper Buford Circle, St. 
Paul, MN 55108sufficient to have a great impact on fuel needs for the 0000. This paper is a joint contribution of the USDA-ARS and the nation (DiPardo, 2000). On the other hand, the quantity Agricultural Research Division of the University of Nebraska. Pubof corn stover is large, but corn stover is generally not lished as Journal Ser. no. 13949. Received 12 Dec. 2002. *Corresponding author (wwilhelm1@unl.edu). Abbreviations: 13C, change in 13C atom percent; DOE, Department Published in Agron. J. 96:1–17 (2004).  American Society of Agronomy of Energy; HI, harvest index; SOC, soil organic carbon; SOM, soil organic matter. 677 S. Segoe Rd., Madison, WI 53711 USA", "title": "" }, { "docid": "f1e858974e84dfa8cb518e9f4f55d812", "text": "To achieve peak performance of an algorithm (in particular for problems in AI), algorithm configuration is often necessary to determine a well-performing parameter configuration. So far, most studies in algorithm configuration focused on proposing better algorithm configuration procedures or on improving a particular algorithm’s performance. In contrast, we use all the collected empirical performance data gathered during algorithm configuration runs to generate extensive insights into an algorithm, given problem instances and the used configurator. To this end, we provide a tool, called CAVE , that automatically generates comprehensive reports and insightful figures from all available empirical data. CAVE aims to help algorithm and configurator developers to better understand their experimental setup in an automated fashion. We showcase its use by thoroughly analyzing the well studied SAT solver spear on a benchmark of software verification instances and by empirically verifying two long-standing assumptions in algorithm configuration and parameter importance: (i) Parameter importance changes depending on the instance set at hand and (ii) Local and global parameter importance analysis do not necessarily agree with each other.", "title": "" }, { "docid": "cd33eed22ccfd1433d71017c4bb9e168", "text": "We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of virtual objects interacted with in two experiments. In a first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user's fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In a second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. 
In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for visual-haptic mismatch perceived during the shifting process.", "title": "" }, { "docid": "c357b9646e31e2d881c0832983593516", "text": "The history of digital image compositing—other than simple digital implementation of known film art—is essentially the history of the alpha channel. Distinctions are drawn between digital printing and digital compositing, between matte creation and matte usage, and between (binary) masking and (subtle) matting. The history of the integral alpha channel and premultiplied alpha ideas are presented and their importance in the development of digital compositing in its current modern form is made clear. Basic Definitions Digital compositing is often confused with several related technologies. Here we distinguish compositing from printing and matte creation—eg, blue-screen matting. Printing v Compositing Digital film printing is the transfer, under digital computer control, of an image stored in digital form to standard chemical, analog movie film. It requires a sophisticated understanding of film characteristics, light source characteristics, precision film movements, film sizes, filter characteristics, precision scanning devices, and digital computer control. We had to solve all these for the Lucasfilm laser-based digital film printer—that happened to be a digital film input scanner too. My colleague David DiFrancesco was honored by the Academy of Motion Picture Art and Sciences last year with a technical award for his achievement on the scanning side at Lucasfilm (along with Gary Starkweather). Also honored was Gary Demos for his CRT-based digital film scanner (along with Dan Cameron). Digital printing is the generalization of this technology to other media, such as video and paper. Digital film compositing is the combining of two or more strips of film—in digital form—to create a resulting strip of film—in digital form—that is the composite of the components. For example, several spacecraft may have been filmed, one per film strip in its separate motion, and a starfield may have also been filmed. Then a digital film compositing step is performed to combine the separate spacecrafts over the starfield. The important point is that none of the technology mentioned above for digital film printing is involved in the digital compositing process. The separate spacecraft elements are digitally represented, and the starfield is digitally represented, so the composite is a strictly digital computation. Digital compositing is the generalization of this technology to other media. This only means that the digital images being combined are represented in resolutions appropriate to their intended final output medium; the compositing techniques involved are the same regardless of output medium being, after all, digital computations. No knowledge of film characteristics, light sources characteristics, film movements, etc. is required for digital compositing. In short, the technology of digital film printing is completely separate from the technology of digital film compositing. 
The technology of digital film scanning is required, perhaps, to get the spacecrafts and starfield into digital form, and that of digital film printing is required to write the composite of these elements out to film, but the composite itself is a computation, not a physico-chemical process. This argument holds regardless of input or output media. In fact, from hereon I will refer to film as my example, it being clear that the argument generalizes to other media. Matte Creation v Matte Usage The general distinction drawn here is between the technology of pulling mattes, or matte creation, and that of compositing, or matte usage. To perform a film composite of, say a spacecraft, over, say a starfield, one must know where on an output film frame to write the foreground spacecraft and where to write the background starfield—that is, where to expose the foreground element to the unexposed film frame and where to expose the background element. We will ignore for the moment, for the purpose of clarity, the problem of partial transparencies of the foreground object that allow the background object to show through partially. In classic film technology, predating the computer by decades ([Beyer64], [Fielding72], [Vlahos80]), the required spatial information is provided by a (traveling) matte, another piece of film that is transparent where the spacecraft, for example, exists in the frame and opaque elsewhere. This can be done with monochrome film. It is also easy to generate the complement of this matte, sometimes called the holdout matte, by simply exposing the matte film strip to an unexposed strip of monochrome film. So the holdout matte film strip is placed up against the background film strip, in frame by frame register, called a bipack configuration of film, and exposed to a strip of unexposed color film. The starfield, for example, gets exposed to this receiving strip where the holdout matte does not hold out—that is, where the holdout matte is transparent. Then the same strip of film is re-exposed to a bipack consisting of the matte and the foreground element. This time the spacecraft, for example, gets exposed exactly where the starfield was not exposed. Digital film compositing technology is, in its simplest implementation, the digital version of this process, where each strip of film is replaced with a digital equivalent, and the composite is done with a digital computation. Once the foreground and background elements are in digital form and the matte is in digital form, then digital film compositing is a computation, not a physico-chemical process. As we shall see, the computer has caused several fundamentally new ideas to be added to the compositor’s arsenal that are not simply simulations of known analog art. The question becomes: Where does the matte come from? There are several classic (pre-computer) answers to this question. One set of techniques (at least one of which, the sodium vapor technique, was invented by Petro Vlahos [Vlahos58]) causes the generation of the matte strip of film simultaneously with the foreground element strip of film. So this technique simultaneously generates two strips of film for each foreground element. Then optical techniques are used, as described above, to form the composite. Digital technology has nothing new to contribute here; it simply emulates the analog technique. Another technique called blue-screen matting provides the matte strip of film after the fact, so to speak. 
Blue-screen matting (or more generally, constant color matting, since blue is not required) was also invented by Petro Vlahos [Vlahos64]. It requires that a foreground element be filmed against a constant-color, often bright ultramarine blue, background. Then with a tricky set of optical and film techniques that don’t need to concern us here, a matte is generated that is transparent where the foreground film strip is the special blue color and opaque elsewhere, or the complement of this. There are digital simulations of this technique that are complicated but involve nothing more than a digital computer to accomplish. The art of generating a matte when one is not provided is often called, in filmmaking circles, pulling a matte. It is an art, requiring experts to accomplish. (I have proved, in fact, in [Smith82b] that blue-screen matting is an underspecified problem in general and therefore requires a human in the loop.) I will generalize this concept to all ways of producing a matte, and term it matte creation. The important point is that matte creation is a technology separate from that of compositing, which is a technology that assumes a matte already exists. In short, the technology of matte creation is completely separate from the technology of digital film compositing. Petro Vlahos has been awarded by the Academy of Motion Picture Arts and Sciences for his inventions of this technology, a lifetime achievement award in fact. The digital computer can be used to simulate what he has done and for relatively minor improvements. At Lucasfilm, my colleague Tom Porter and I implemented digital matte creation techniques and improved them, but do not consider this part of our compositing technology. It is part of our matte creation technology. It is time now to return to the discussion of transparency mentioned earlier. One of the hardest things to accomplish in matte creation technology is the representation of partial transparency in the matte. Transparencies are important for foreground elements such as glasses of water, windows, hair, halos, filmy clothes, motion blurred objects, etc. I will not go into the details of why this is difficult or how it is solved, because that is irrelevant to the arguments here. The important points are (1) partial transparency is fundamental to convincing composites, and (2) representing transparencies in a matte is part matte creation technology, not the compositing technology, which just uses the result.", "title": "" }, { "docid": "86ee2f9f92c6da2a21cd91c446b30ab3", "text": "Face detection (FD) is widely used in interactive user interfaces, in advertising industry, entertainment services, video coding, is necessary first stage for all face recognition systems, etc. However, the last practical and independent comparisons of FD algorithms were made by Hjelmas et al. and by Yang et al. in 2001. The aim of this work is to propose parameters of FD algorithms quality evaluation and methodology of their objective comparison, and to show the current state of the art in face detection. The main idea is routine test of the FD algorithm in the labeled image datasets. Faces are represented by coordinates of the centers of the eyes in these datasets. For algorithms, representing detected faces by rectangles, the statistical model of eyes’ coordinates estimation was proposed. 
In this work the seven face detection algorithms were tested; article contains the results of their comparison.", "title": "" }, { "docid": "60606403844df78f3d2a569813fdac96", "text": "Charge transport models developed for disordered organic semiconductors predict a non-Arrhenius temperature dependence ln(μ) ∝ 1/T² for the mobility μ. We demonstrate that in space-charge limited diodes the hole mobility (μh) of a large variety of organic semiconductors shows a universal Arrhenius temperature dependence μh(T) = μ0 exp(-Δ/kT) at low fields, due to the presence of extrinsic carriers from the Ohmic contact. The transport in a range of organic semiconductors, with a variation in room temperature mobility of more than 6 orders of magnitude, is characterized by a universal mobility μ0 of 30-40 cm²/V s. As a result, we can predict the full temperature dependence of their charge transport properties with only the mobility at one temperature known.", "title": "" }, { "docid": "36d1cb90c0c94fab646ff90065b40258", "text": "This paper provides an in-depth view on nanosensor technology and electromagnetic communication among nanosensors. First, the state of the art in nanosensor technology is surveyed from the device perspective, by explaining the details of the architecture and components of individual nanosensors, as well as the existing manufacturing and integration techniques for nanosensor devices. Some interesting applications of wireless nanosensor networks are highlighted to emphasize the need for communication among nanosensor devices. A new network architecture for the interconnection of nanosensor devices with existing communication networks is provided. The communication challenges in terms of terahertz channel modeling, information encoding and protocols for nanosensor networks are highlighted, defining a roadmap for the development of this new networking", "title": "" }, { "docid": "2072a27deaab8a7356d306c9aef07efc", "text": "In this paper, a closed-loop active IGBT gate drive providing highly dynamic diC/dt and dvCE/dt control is proposed. By means of using only simple passive measurement circuits for the generation of the feedback signals and a single operational amplifier as PI-controller, high analog control bandwidth is achieved enabling the application even for switching times in the sub-microsecond range. Therewith, contrary to state of the art gate drives, the parameter dependencies and nonlinearities of the IGBT are compensated enabling accurately specified and constant diC/dt and dvCE/dt values of the IGBT for the entire load and temperature range. This ensures the operation of an IGBT in the safe operating area (SOA), i.e. with limited turn-on peak reverse recovery current and turn-off overvoltage, and permits the restriction of electromagnetic interference (EMI). A hardware prototype is built to experimentally verify the proposed closed-loop active gate drive concept.", "title": "" }, { "docid": "8087288ed5fe59292db81d30c885c4ba", "text": "We present a new cluster scheduler, Graphene, aimed at jobs that have a complex dependency structure and heterogeneous resource demands. Relaxing either of these challenges, i.e., scheduling a DAG of homogeneous tasks or an independent set of heterogeneous tasks, leads to NP-hard problems. Reasonable heuristics exist for these simpler problems, but they perform poorly when scheduling heterogeneous DAGs. 
Our key insights are: (1) focus on the long-running tasks and those with tough-to-pack resource demands, (2) compute a DAG schedule, offline, by first scheduling such troublesome tasks and then scheduling the remaining tasks without violating dependencies. These offline schedules are distilled to a simple precedence order and are enforced by an online component that scales to many jobs. The online component also uses heuristics to compactly pack tasks and to trade-off fairness for faster job completion. Evaluation on a 200-server cluster and using traces of production DAGs at Microsoft shows that Graphene improves median job completion time by 25% and cluster throughput by 30%.", "title": "" }, { "docid": "1804ba10a62f81302f2701cfe0330783", "text": "We describe a web browser fingerprinting technique based on measuring the onscreen dimensions of font glyphs. Font rendering in web browsers is affected by many factors—browser version, what fonts are installed, and hinting and antialiasing settings, to name a few— that are sources of fingerprintable variation in end-user systems. We show that even the relatively crude tool of measuring glyph bounding boxes can yield a strong fingerprint, and is a threat to users’ privacy. Through a user experiment involving over 1,000 web browsers and an exhaustive survey of the allocated space of Unicode, we find that font metrics are more diverse than User-Agent strings, uniquely identifying 34% of participants, and putting others into smaller anonymity sets. Fingerprinting is easy and takes only milliseconds. We show that of the over 125,000 code points examined, it suffices to test only 43 in order to account for all the variation seen in our experiment. Font metrics, being orthogonal to many other fingerprinting techniques, can augment and sharpen those other techniques. We seek ways for privacy-oriented web browsers to reduce the effectiveness of font metric–based fingerprinting, without unduly harming usability. As part of the same user experiment of 1,000 web browsers, we find that whitelisting a set of standard font files has the potential to more than quadruple the size of anonymity sets on average, and reduce the fraction of users with a unique font fingerprint below 10%. We discuss other potential countermeasures.", "title": "" }, { "docid": "6ed8f357b6eb0d48d7f396e74a52a11a", "text": "In many applications, Unmanned Aerial Vehicles (UAVs) provide an indispensable platform for gathering information about the situation on the ground. However, to maximise information gained about the environment, such platforms require increased autonomy to coordinate the actions of multiple UAVs. This has led to the development of flight planning and coordination algorithms designed to maximise information gain during sensing missions. However, these have so far neglected the need to maintain wireless network connectivity. In this paper, we address this limitation by enhancing an existing multi-UAV planning algorithm with two new features that together make a significant contribution to the state-of-the-art: (1) we incorporate an on-line learning procedure that enables UAVs to adapt to the radio propagation characteristics of their environment, and (2) we integrate flight path and network routing decisions, so that modelling uncertainty and the effect of UAV position on network performance is taken into account.", "title": "" }, { "docid": "98729fc6a6b95222e6a6a12aa9a7ded7", "text": "What good is self-control? 
We incorporated a new measure of individual differences in self-control into two large investigations of a broad spectrum of behaviors. The new scale showed good internal consistency and retest reliability. Higher scores on self-control correlated with a higher grade point average, better adjustment (fewer reports of psychopathology, higher self-esteem), less binge eating and alcohol abuse, better relationships and interpersonal skills, secure attachment, and more optimal emotional responses. Tests for curvilinearity failed to indicate any drawbacks of so-called overcontrol, and the positive effects remained after controlling for social desirability. Low self-control is thus a significant risk factor for a broad range of personal and interpersonal problems.", "title": "" }, { "docid": "d01b8d59f5e710bcf75978d1f7dcdfa3", "text": "Over the last few decades, the use of electroencephalography (EEG) signals for motor imagery based brain-computer interface (MI-BCI) has gained widespread attention. Deep learning have also gained widespread attention and used in various application such as natural language processing, computer vision and speech processing. However, deep learning has been rarely used for MI EEG signal classification. In this paper, we present a deep learning approach for classification of MI-BCI that uses adaptive method to determine the threshold. The widely used common spatial pattern (CSP) method is used to extract the variance based CSP features, which is then fed to the deep neural network for classification. Use of deep neural network (DNN) has been extensively explored for MI-BCI classification and the best framework obtained is presented. The effectiveness of the proposed framework has been evaluated using dataset IVa of the BCI Competition III. It is found that the proposed framework outperforms all other competing methods in terms of reducing the maximum error. The framework can be used for developing BCI systems using wearable devices as it is computationally less expensive and more reliable compared to the best competing methods.", "title": "" }, { "docid": "2938a4977b570228644cdabaefece5e7", "text": "We describe a very simple framework for deriving the most-well known optimization problems in Active Appearance Models (AAMs), and most importantly for providing efficient solutions. Our formulation results in two optimization problems for fast and exact AAM fitting, and one new algorithm which has the important advantage of being applicable to 3D. We show that the dominant cost for both forward and inverse algorithms is a few times mN which is the cost of projecting an image onto the appearance subspace. This makes both algorithms not only computationally realizable but also very attractive speed-wise for most current systems. Because exact AAM fitting is no longer computationally prohibitive, we trained AAMs in-the-wild with the goal of investigating whether AAMs benefit from such a training process. Our results show that although we did not use sophisticated shape priors, robust features or robust norms for improving performance, AAMs perform notably well and in some cases comparably with current state-of-the-art methods. 
We provide Matlab source code for training, fitting and reproducing the results presented in this paper at http://ibug.doc.ic.ac.uk/resources.", "title": "" }, { "docid": "6033f644fb18ce848922a51d3b0000ab", "text": "This paper tests two of the simplest and most popular trading rules, moving average and trading range break, by utilizing a very long data series, the Dow Jones index from 1897 to 1986. Standard statistical analysis is extended through the use of bootstrap techniques. Overall our results provide strong support for the technical strategies that are explored. The returns obtained from buy (sell) signals are not consistent with the three popular null models: the random walk, the AR(1) and the GARCH-M. Consistently, buy signals generate higher returns than sell signals. Moreover, returns following sell signals are negative which is not easily explained by any of the currently existing equilibrium models. Furthermore the returns following buy signals are less volatile than returns following sell signals. The term, \"technical analysis,\" is a general heading for a myriad of trading techniques. Technical analysts attempt to forecast prices by the study of past prices and a few other related summary statistics about security trading. They believe that shifts in supply and demand can be detected in charts of market action. Technical analysis is considered by many to be the original form of investment analysis, dating back to the 1800's. It came into widespread use before the period of extensive and fully disclosed financial information, which in turn enabled the practice of fundamental analysis to develop. In the U.S., the use of trading rules to detect patterns in stock prices is probably as old as the stock market itself. The oldest technique is attributed to Charles Dow and is traced to the late 1800's. Many of the techniques used today have been utilized for over 60 years. These techniques for discovering hidden relations in stock returns can range from extremely simple to quite elaborate. The attitude of academics towards technical analysis, until recently, is well described by Malkiel (1981): \"Obviously, I am biased against the chartist. This is not only a personal predilection, but a professional one as well. Technical analysis is anathema to the academic world. We love to pick on it. Our bullying tactics are prompted by two considerations: (1) the method is patently false; and (2) it's easy to pick on. And while it may seem a bit unfair to pick on such a sorry target, just remember: it is your money we are trying to save.\" Nonetheless, technical analysis has been enjoying a renaissance on Wall Street. All major brokerage firms publish technical commentary on the market and individual securities and many of the newsletters published by various \"experts\" are based on technical analysis. In recent years the efficient market hypothesis has come under serious siege. Various papers suggested that stock returns are not fully explained by common risk measures. A significant relationship between expected return and fundamental variables such as price-earnings ratio, market-to-book ratio and size was documented. Another group of papers has uncovered systematic patterns in stock returns related to various calendar periods such as the weekend effect, the turn-of-the-month effect, the holiday effect and the January effect. A line of research directly related to this work provides evidence of predictability of equity returns from past returns. 
De Bondt and Thaler (1985), Fama and French (1986), and Poterba and Summers (1988) find negative serial correlation in returns of individual stocks and various portfolios over three to ten year intervals. Rosenberg, Reid, and Lanstein (1985) provide evidence for the presence of predictable return reversals on a monthly basis", "title": "" }, { "docid": "4aaab0aa476c60b2486fc76d63f7d899", "text": "When evaluating a potential product purchase, customers may have many questions in mind. They want to get adequate information to determine whether the product of interest is worth their money. In this paper we present a simple deep learning model for answering questions regarding product facts and specifications. Given a question and a product specification, the model outputs a score indicating their relevance. To train and evaluate our proposed model, we collected a dataset of 7,119 questions that are related to 153 different products. Experimental results demonstrate that — despite its simplicity — the performance of our model is shown to be comparable to a more complex state-of-the-art baseline.", "title": "" }, { "docid": "a68cec6fd069499099c8bca264eb0982", "text": "The anti-saccade task has emerged as an important task for investigating the flexible control that we have over behaviour. In this task, participants must suppress the reflexive urge to look at a visual target that appears suddenly in the peripheral visual field and must instead look away from the target in the opposite direction. A crucial step involved in performing this task is the top-down inhibition of a reflexive, automatic saccade. Here, we describe recent neurophysiological evidence demonstrating the presence of this inhibitory function in single-cell activity in the frontal eye fields and superior colliculus. Patients diagnosed with various neurological and/or psychiatric disorders that affect the frontal lobes or basal ganglia find it difficult to suppress the automatic pro-saccade, revealing a deficit in top-down inhibition.", "title": "" } ]
scidocsrr
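The trading-rule passage in the record above describes the moving-average and trading-range-break rules only in prose. A minimal sketch of how such signals are typically generated is given below; it is an illustrative reconstruction, not the paper's code, and the window lengths, band parameter, and example settings are assumptions.

```python
def moving_average_signals(prices, short=50, long=200, band=0.0):
    """Return +1 (buy), -1 (sell), or 0 for each day of a price series.

    A buy is signalled when the short moving average rises above the long
    one by more than `band`; a sell when it falls below by more than `band`.
    Window lengths and band are illustrative, not the paper's exact settings.
    """
    signals = []
    for t in range(len(prices)):
        if t + 1 < long:
            signals.append(0)          # not enough history yet
            continue
        ma_short = sum(prices[t + 1 - short:t + 1]) / short
        ma_long = sum(prices[t + 1 - long:t + 1]) / long
        if ma_short > ma_long * (1 + band):
            signals.append(+1)
        elif ma_short < ma_long * (1 - band):
            signals.append(-1)
        else:
            signals.append(0)
    return signals


def trading_range_break_signals(prices, window=150):
    """Buy when the price breaks above the rolling maximum of the previous
    `window` days, sell when it breaks below the rolling minimum."""
    signals = [0] * len(prices)
    for t in range(window, len(prices)):
        past = prices[t - window:t]
        if prices[t] > max(past):
            signals[t] = +1
        elif prices[t] < min(past):
            signals[t] = -1
    return signals
```

Returns on buy and sell days can then be compared against bootstrap replications of the null models named in the abstract (random walk, AR(1), GARCH-M).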
68d51b682ae7827bb791c57655b8e741
Development of anthropomorphic robot hand with dual-mode twisting actuation and electromagnetic joint locking mechanism
[ { "docid": "869e25cddac2d14f58e4ab669fb2eae0", "text": "Actuation and control of robotic hands is one of the difficult problems in mechatronics that still needs to be solved. In this paper we present a kinematic analysis of a robotic finger that is actuated by twisted strings mechanism, which we named Twist Drive. The principle of actuation is briefly described in the paper and a structure of the developed robotic finger is presented. Forward kinematics for a finger with nonlinearly coupled joints is given. The obtained Jacobian is used in position control and in force control on the finger's tip. Experimental results are presented. Force control in two directions is successfully demonstrated.", "title": "" }, { "docid": "a15c6d2f8905f66b23468c5c00009bf3", "text": "This paper proposes a biomechatronic approach to the design of an anthropomorphic artificial hand able to mimic the natural motion of the human fingers. The hand is conceived to be applied to prosthetics as well as to humanoid and personal robotics; hence, anthropomorphism is a fundamental requirement to be addressed both in the physical aspect and in the functional behavior. In this paper, a biomechatronic approach is addressed to harmonize the mechanical design of the anthropomorphic artificial hand with the design of the hand control system. More in detail, this paper focuses on the control system of the hand and on the optimization of the hand design in order to obtain a human-like kinematics and dynamics. By evaluating the simulated hand performance, the mechanical design is iteratively refined. The mechanical structure and the ratio between number of actuators and number of degrees of freedom (DOFs) have been optimized in order to cope with the strict size and weight constraints that are typical of application of artificial hands to prosthetics and humanoid robotics. The proposed hand has a kinematic structure similar to the natural hand featuring three articulated fingers (thumb, index, and middle finger with 3 DOF for each finger and 1 DOF for the abduction/adduction of the thumb) driven by four dc motors. A special underactuated transmission has been designed that allows keeping the number of motors as low as possible while achieving a self-adaptive grasp, as a result of the passive compliance of the distal DOF of the fingers. A proper hand control scheme has been designed and implemented for the study and optimization of hand motor performance in order to achieve a human-like motor behavior. To this aim, available data on motion of the human fingers are collected from the neuroscience literature in order to derive a reference input for the control. Simulation trials and computer-aided design (CAD) mechanical tools are used to obtain a finger model including its dynamics. Also the closed-loop control system is simulated in order to study the effect of iterative mechanical redesign and to define the final set of mechanical parameters for the hand optimization. Results of the experimental tests carried out for validating the model of the robotic finger, and details on the process of integrated refinement and optimization of the mechanical structure and of the hand motor control scheme are extensively reported in the paper.", "title": "" } ]
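The Twist Drive passage above leans on the standard kinematics of a twisted-string transmission: twisting a string pair of length L and strand radius r by a motor angle theta shortens the transmission along its axis. The helix model below is the commonly used approximation, shown here as a hedged sketch rather than the authors' exact formulation; the numeric values in the example are assumed.

```python
import math


def twisted_string_length(theta, L, r):
    """Transmission length after twisting by motor angle `theta` (rad).

    Standard helix model for twisted-string actuators: the twisted strand
    forms a helix, so its axial length is sqrt(L^2 - (theta*r)^2).
    """
    return math.sqrt(L**2 - (theta * r)**2)


def transmission_ratio(theta, L, r):
    """dX/dtheta: the configuration-dependent 'gear ratio' used when mapping
    motor angle to tendon displacement (and on to joint angle) inside a
    Jacobian-based position or force controller."""
    return -(theta * r**2) / math.sqrt(L**2 - (theta * r)**2)


# Example with assumed values: a 200 mm string pair of 0.5 mm radius, twisted 40 rad.
L, r, theta = 0.200, 0.0005, 40.0
print(twisted_string_length(theta, L, r))  # ~0.1990 m, i.e. about 1 mm of contraction
```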
[ { "docid": "9c0d65ee42ccfaa291b576568bad59e0", "text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.", "title": "" }, { "docid": "5ba6ec8c7f9dc4d2b6c55a505ce394a7", "text": "We develop a data structure, the spatialized normal cone hierarchy, and apply it to interactive solutions for model silhouette extraction, local minimum distance computations, and area light source shadow umbra and penumbra boundary determination. The latter applications extend the domain of surface normal encapsulation from problems described by a point and a model to problems involving two models.", "title": "" }, { "docid": "c40f1282c12a9acee876d127dffbd733", "text": "Online markets pose a difficulty for evaluating products, particularly experience goods, such as used cars, that cannot be easily described online. This exacerbates product uncertainty, the buyer’s difficulty in evaluating product characteristics, and predicting how a product will perform in the future. However, the IS literature has focused on seller uncertainty and ignored product uncertainty. To address this void, this study conceptualizes product uncertainty and examines its effects and antecedents in online markets for used cars (eBay Motors).", "title": "" }, { "docid": "f31ec6460f0e938f8e43f5b9be055aaf", "text": "Many people have turned to technological tools to help them be physically active. To better understand how goal-setting, rewards, self-monitoring, and sharing can encourage physical activity, we designed a mobile phone application and deployed it in a four-week field study (n=23). Participants found it beneficial to have secondary and primary weekly goals and to receive non-judgmental reminders. However, participants had problems with some features that are commonly used in practice and suggested in the literature. 
For example, trophies and ribbons failed to motivate most participants, which raises questions about how such rewards should be designed. A feature to post updates to a subset of their Facebook NewsFeed created some benefits, but barriers remained for most participants.", "title": "" }, { "docid": "848f8efe11785c00e8e8af737d173d44", "text": "Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers’ habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions are timely checked by investigators). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that everyday analyze massive streams of credit card transactions. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class unbalance and concept drift in a real-world data stream containing more than 75 million transactions, authorized over a time window of three years.", "title": "" }, { "docid": "779ca56cf734a3b187095424c79ae554", "text": "Web crawlers are automated tools that browse the web to retrieve and analyze information. Although crawlers are useful tools that help users to find content on the web, they may also be malicious. Unfortunately, unauthorized (malicious) crawlers are increasingly becoming a threat for service providers because they typically collect information that attackers can abuse for spamming, phishing, or targeted attacks. In particular, social networking sites are frequent targets of malicious crawling, and there were recent cases of scraped data sold on the black market and used for blackmailing. In this paper, we introduce PUBCRAWL, a novel approach for the detection and containment of crawlers. Our detection is based on the observation that crawler traffic significantly differs from user traffic, even when many users are hidden behind a single proxy. Moreover, we present the first technique for crawler campaign attribution that discovers synchronized traffic coming from multiple hosts. Finally, we introduce a containment strategy that leverages our detection results to efficiently block crawlers while minimizing the impact on legitimate users. Our experimental results in a large, wellknown social networking site (receiving tens of millions of requests per day) demonstrate that PUBCRAWL can distinguish between crawlers and users with high accuracy. 
We have completed our technology transfer, and the social networking site is currently running PUBCRAWL in production.", "title": "" }, { "docid": "72c054c955a34fbac8e798665ece8f57", "text": "In this paper, we propose and empirically validate a suite of hotspot patterns: recurring architecture problems that occur in most complex systems and incur high maintenance costs. In particular, we introduce two novel hotspot patterns, Unstable Interface and Implicit Cross-module Dependency. These patterns are defined based on Baldwin and Clark's design rule theory, and detected by the combination of history and architecture information. Through our tool-supported evaluations, we show that these patterns not only identify the most error-prone and change-prone files, they also pinpoint specific architecture problems that may be the root causes of bug-proneness and change-proneness. Significantly, we show that 1) these structure-history integrated patterns contribute more to error- and change-proneness than other hotspot patterns, and 2) the more hotspot patterns a file is involved in, the more error- and change-prone it is. Finally, we report on an industrial case study to demonstrate the practicality of these hotspot patterns. The architect and developers confirmed that our hotspot detector discovered the majority of the architecture problems causing maintenance pain, and they have started to improve the system's maintainability by refactoring and fixing the identified architecture issues.", "title": "" }, { "docid": "0997c292d6518b17991ce95839d9cc78", "text": "A word's sentiment depends on the domain in which it is used. Computational social science research thus requires sentiment lexicons that are specific to the domains being studied. We combine domain-specific word embeddings with a label propagation framework to induce accurate domain-specific sentiment lexicons using small sets of seed words. We show that our approach achieves state-of-the-art performance on inducing sentiment lexicons from domain-specific corpora and that our purely corpus-based approach outperforms methods that rely on hand-curated resources (e.g., WordNet). Using our framework, we induce and release historical sentiment lexicons for 150 years of English and community-specific sentiment lexicons for 250 online communities from the social media forum Reddit. The historical lexicons we induce show that more than 5% of sentiment-bearing (non-neutral) English words completely switched polarity during the last 150 years, and the community-specific lexicons highlight how sentiment varies drastically between different communities.", "title": "" }, { "docid": "02322377d048f2469928a71290cf1566", "text": "In order to interact with human environments, humanoid robots require safe and compliant control which can be achieved through force-controlled joints. In this paper, full body step recovery control for robots with force-controlled joints is achieved by adding model-based feed-forward controls. Push Recovery Model Predictive Control (PR-MPC) is presented as a method for generating full-body step recovery motions after a large disturbance. Results are presented from experiments on the Sarcos Primus humanoid robot that uses hydraulic actuators instrumented with force feedback control.", "title": "" }, { "docid": "153233047bde15ed5947b42ab3017796", "text": "The paper demonstrates use of wireless sensor as input data source for fast real-time automatic control. 
Fast, underactuated and highly unstable inverse triple pendulum system is controlled on the basis of values measured on the pendulum arms and wirelessly transmitted to the controller. The presented solution working in 2.4 GHz band and based on Atmel SAM R21 SoC on the sensor side and Atmel AT86RF233 transceiver on the controller side delivers data from 2 IRC sensors and 3-axis gyroscope with latency as low as 350 us.", "title": "" }, { "docid": "2fa61482be37fd956e6eceb8e517411d", "text": "According to analysis reports on road accidents of recent years, it's renowned that the main cause of road accidents resulting in deaths, severe injuries and monetary losses, is due to a drowsy or a sleepy driver. Drowsy state may be caused by lack of sleep, medication, drugs or driving continuously for long time period. An increase rate of roadside accidents caused due to drowsiness during driving indicates a need of a system that detects such state of a driver and alerts him prior to the occurrence of any accident. During the recent years, many researchers have shown interest in drowsiness detection. Their approaches basically monitor either physiological or behavioral characteristics related to the driver or the measures related to the vehicle being used. A literature survey summarizing some of the recent techniques proposed in this area is provided. To deal with this problem we propose an eye blink monitoring algorithm that uses eye feature points to determine the open or closed state of the eye and activate an alarm if the driver is drowsy. Detailed experimental findings are also presented to highlight the strengths and weaknesses of our technique. An accuracy of 94% has been recorded for the proposed methodology.", "title": "" }, { "docid": "7f76ea505b74bb03b3cff1aa70e0836d", "text": "BACKGROUND\nVision plays a critical role in athletic performance; however, previous studies have demonstrated that a variety of simulated athletic sensorimotor tasks can be surprisingly resilient to retinal defocus (blurred vision). The purpose of the present study was to extend this work to determine the effect of retinal defocus on overall basketball free throw performance, as well as for the factors gender, refractive error and experience.\n\n\nMETHODS\nForty-four young adult participants of both genders were recruited. They had a range of refractive errors and basketball experience. Each performed 20 standard basketball free throws under five lens defocus conditions in a randomised manner: plano, +1.50 D, +3.00 D, +4.50 D and +10.00 D.\n\n\nRESULTS\nOverall, free throw performance was significantly reduced under the +10.00 D lens defocus condition only. Previous experience, but neither refractive error nor gender, yielded a statistically significant difference in performance.\n\n\nCONCLUSION\nConsistent with previous studies of complex sensorimotor tasks, basketball free throw performance was resilient to low and moderate levels of retinal defocus. Thus, for a relatively non-dynamic motor task at a fixed far distance, such as the basketball free throw, precise visual clarity was not critical. Other factors such as motor memory may be important. 
However, in the dynamic athletic competitive environment it is likely that visual clarity plays a more critical role in one's performance level, at least for specific task demands.", "title": "" }, { "docid": "d105cbc8151252a04388f30622513906", "text": "Heart disease causing cardiac cell death due to ischemia–reperfusion injury is a major cause of morbidity and mortality in the United States. Coronary heart disease and cardiomyopathies are the major cause for congestive heart failure, and thrombosis of the coronary arteries is the most common cause of myocardial infarction. Cardiac injury is followed by post-injury cardiac remodeling or fibrosis. Cardiac fibrosis is characterized by net accumulation of extracellular matrix proteins in the cardiac interstitium and results in both systolic and diastolic dysfunctions. It has been suggested by both experimental and clinical evidence that fibrotic changes in the heart are reversible. Hence, it is vital to understand the mechanism involved in the initiation, progression, and resolution of cardiac fibrosis to design anti-fibrotic treatment modalities. Animal models are of great importance for cardiovascular research studies. With the developing research field, the choice of selecting an animal model for the proposed research study is crucial for its outcome and translational purpose. Compared to large animal models for cardiac research, the mouse model is preferred by many investigators because of genetic manipulations and easier handling. This critical review is focused to provide insight to young researchers about the various mouse models, advantages and disadvantages, and their use in research pertaining to cardiac fibrosis and hypertrophy.", "title": "" }, { "docid": "291f3f95cf06f6ac3bda91178ee3ce1b", "text": "this paper discusses the several research methodologies that can be used in Computer Science (CS) and Information Systems (IS). The research methods vary according to the science domain and project field. However a little of research methodologies can be reasonable for CS and IS. KeywordsComputer Science(CS), Information Systems (IS),Research Methodologies.", "title": "" }, { "docid": "8b0870c8e975eeff8597eb342cd4f3f9", "text": "We propose a novel recursive partitioning method for identifying subgroups of subjects with enhanced treatment effects based on a differential effect search algorithm. The idea is to build a collection of subgroups by recursively partitioning a database into two subgroups at each parent group, such that the treatment effect within one of the two subgroups is maximized compared with the other subgroup. The process of data splitting continues until a predefined stopping condition has been satisfied. The method is similar to 'interaction tree' approaches that allow incorporation of a treatment-by-split interaction in the splitting criterion. However, unlike other tree-based methods, this method searches only within specific regions of the covariate space and generates multiple subgroups of potential interest. We develop this method and provide guidance on key topics of interest that include generating multiple promising subgroups using different splitting criteria, choosing optimal values of complexity parameters via cross-validation, and addressing Type I error rate inflation inherent in data mining applications using a resampling-based method. 
We evaluate the operating characteristics of the procedure using a simulation study and illustrate the method with a clinical trial example.", "title": "" }, { "docid": "ea23f29cfec340cacb686af9f9d6946c", "text": "The controllers of single-phase grid-tied inverters require improvements to enable distribution generation systems to meet the grid codes/standards with respect to power quality and the fault ride through capability. In that case, the response of the selected synchronization technique is crucial for the performance of the entire grid-tied inverter. In this paper, a new synchronization method with good dynamics and high accuracy under a highly distorted voltage is proposed. This method uses a Multiharmonic Decoupling Cell (MHDC), which thus can cancel out the oscillations on the synchronization signals due to the harmonic voltage distortion while maintaining the dynamic response of the synchronization. Therefore, the accurate and dynamic response of the proposed MHDC-PLL can be beneficial for the performance of the whole single-phase grid-tied inverter.", "title": "" }, { "docid": "f5e934d65fa436cdb8e5cfa81ea29028", "text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.", "title": "" }, { "docid": "33d4f2ccb5b228b08c82e55f136b98ba", "text": "As data volumes continue to rise, manual inspection is becoming increasingly untenable. In response, we present MacroBase, a data analytics engine that prioritizes end-user attention in high-volume fast data streams. MacroBase enables efficient, accurate, and modular analyses that highlight and aggregate important and unusual behavior, acting as a search engine for fast data. MacroBase is able to deliver order-of-magnitude speedups over alternatives by optimizing the combination of explanation and classification tasks and by leveraging a new reservoir sampler and heavy-hitters sketch specialized for fast data streams. As a result, MacroBase delivers accurate results at speeds of up to 2M events per second per query on a single core. The system has delivered meaningful results in production, including at a telematics company monitoring hundreds of thousands of vehicles.", "title": "" }, { "docid": "462a0746875e35116f669b16d851f360", "text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. 
In training the DAE, we still adopt the greedy layer-wise pretraining plus fine-tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth to the DAE consistently increases the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.", "title": "" }, { "docid": "286dd9575b4de418b0d2daf121306e62", "text": "Impedance transforming networks are described which consist of short lengths of relatively high impedance transmission line alternating with short lengths of relatively low impedance line. The sections of transmission line are all exactly the same length (except for corrections for fringing capacitances), and the lengths of the line sections are typically short compared to a quarter wavelength throughout the operating band of the transformer. Tables of designs are presented which give exactly Chebyshev transmission characteristics between resistive terminations having ratios ranging from 1.5 to 10, and for fractional bandwidths ranging from 0.10 to 1.20. These impedance-transforming networks should have application where very compact transmission-line or dielectric-layer impedance transformers are desired.", "title": "" } ]
scidocsrr
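The denoising-autoencoder passage in the record above trains each layer on noisy-clean speech pairs and then stacks and fine-tunes the result. The sketch below shows the general shape of that greedy layer-wise procedure; it is an illustration only, and the framework (PyTorch), layer sizes, activation, and optimizer settings are assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn


class DenoisingAE(nn.Module):
    """One-hidden-layer autoencoder mapping noisy features to clean targets."""

    def __init__(self, dim_in, dim_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(dim_hidden, dim_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def pretrain_stack(noisy, clean, hidden_sizes, epochs=10, lr=1e-3):
    """Greedy layer-wise pretraining: each AE is trained on the (transformed)
    noisy input with the (transformed) clean signal as target, then its encoder
    output feeds the next AE, as the abstract describes."""
    aes, x_noisy, x_clean = [], noisy, clean
    for h in hidden_sizes:
        ae = DenoisingAE(x_noisy.shape[1], h)
        opt = torch.optim.Adam(ae.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(ae(x_noisy), x_clean)
            loss.backward()
            opt.step()
        aes.append(ae)
        with torch.no_grad():          # propagate both streams to the next layer
            x_noisy = ae.encoder(x_noisy)
            x_clean = ae.encoder(x_clean)
    return aes  # stack these encoders/decoders and fine-tune end to end
```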
da4e1a55fec7b1700beaef04a4b08246
A Modified FCM Algorithm for MRI Brain Image Segmentation
[ { "docid": "67aac8ddbd97ea2aeb56a954fcf099f3", "text": "Image segmentation is very essential and critical to image processing and pattern recognition. This survey provides a summary of color image segmentation techniques available now. Basically, color segmentation approaches are based on monochrome segmentation approaches operating in different color spaces. Therefore, we first discuss the major segmentation approaches for segmenting monochrome images: histogram thresholding, characteristic feature clustering, edge detection, region-based methods, fuzzy techniques, neural networks, etc.; then review some major color representation methods and their advantages/disadvantages; finally summarize the color image segmentation techniques using different color representations. The usage of color models for image segmentation is also discussed. Some novel approaches such as fuzzy method and physics-based method are investigated as well.", "title": "" }, { "docid": "548b9580c2b36bd1730392a92f6640c2", "text": "Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of magnetic resonance (MR) images. Unfortunately, MR images always contain a significant amount of noise caused by operator performance, equipment, and the environment, which can lead to serious inaccuracies with segmentation. A robust segmentation technique based on an extension to the traditional fuzzy c-means (FCM) clustering algorithm is proposed in this paper. A neighborhood attraction, which is dependent on the relative location and features of neighboring pixels, is shown to improve the segmentation performance dramatically. The degree of attraction is optimized by a neural-network model. Simulated and real brain MR images with different noise levels are segmented to demonstrate the superiority of the proposed technique compared to other FCM-based methods. This segmentation method is a key component of an MR image-based classification system for brain tumors, currently being developed.", "title": "" }, { "docid": "fd2b1d2a4d44f0535ceb6602869ffe1c", "text": "A conventional FCM algorithm does not fully utilize the spatial information in the image. In this paper, we present a fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. The advantages of the new method are the following: (1) it yields regions more homogeneous than those of other methods, (2) it reduces the spurious blobs, (3) it removes noisy spots, and (4) it is less sensitive to noise than other techniques. This technique is a powerful method for noisy image segmentation and works for both single and multiple-feature data with spatial information.", "title": "" }, { "docid": "ad9f00a73306cba20073385c7482ba43", "text": "We present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification.
Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.", "title": "" } ]
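The positive passages above all modify fuzzy c-means with neighborhood information. The sketch below follows the general form of spatial FCM, in which a pixel's membership is re-weighted by the memberships of its neighbors; it is a hedged illustration rather than a faithful reimplementation of any one cited paper, and the parameter choices (m = 2, a 3x3 window, p = q = 1) are assumptions.

```python
import numpy as np


def spatial_fcm(img, c=3, m=2.0, p=1.0, q=1.0, iters=20):
    """Fuzzy c-means on a 2-D image with a spatial membership term.

    u[k]  : standard FCM membership of each pixel to cluster k
    h[k]  : sum of memberships over a 3x3 neighborhood (the 'spatial function')
    u_new : u**p * h**q, renormalized, so neighbors pull a pixel toward their label
    """
    x = img.astype(float).ravel()
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9        # (c, N) distances
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)                       # standard FCM memberships
        h = np.empty_like(u)
        for k in range(c):                                      # 3x3 box sum per cluster
            uk = u[k].reshape(img.shape)
            padded = np.pad(uk, 1, mode="edge")
            h[k] = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                       for i in range(3) for j in range(3)).ravel()
        u = (u ** p) * (h ** q)
        u /= u.sum(axis=0, keepdims=True)
        um = u ** m
        centers = (um * x).sum(axis=1) / um.sum(axis=1)          # update cluster centers
    return u.reshape(c, *img.shape), centers
```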
[ { "docid": "c0d4538f34499d19f14c3adba8527280", "text": "OBJECTIVE\nTo consider the use of the diagnostic category 'complex posttraumatic stress disorder' (c-PTSD) as detailed in the forthcoming ICD-11 classification system as a less stigmatising, more clinically useful term, instead of the current DSM-5 defined condition of 'borderline personality disorder' (BPD).\n\n\nCONCLUSIONS\nTrauma, in its broadest definition, plays a key role in the development of both c-PTSD and BPD. Given this current lack of differentiation between these conditions, and the high stigma faced by people with BPD, it seems reasonable to consider using the diagnostic term 'complex posttraumatic stress disorder' to decrease stigma and provide a trauma-informed approach for BPD patients.", "title": "" }, { "docid": "8d4169969067679c9ad92e26d257ddf9", "text": "Video walls are useful to display large visualizations. The SAGE2 web-based system allows easy programming of scalable visualization applications. However, it is not easy to stream high resolution video to SAGE2 powered video walls. We proposed several methods of high resolution video streaming to LCD walls, evaluated their performance and discuss their scalability and properties.", "title": "" }, { "docid": "c07cb4fee98fd54b21f2f46b7384f171", "text": "This study was conducted to provide basic data as part of a project to distinguish naturally occurring organic acids from added preservatives. Accordingly, we investigated naturally occurring levels of sorbic, benzoic and propionic acids in fish and their processed commodities. The levels of sorbic, benzoic and propionic acids in 265 fish and their processed commodities were determined by high-performance liquid chromatography-photodiode detection array (HPLC-PDA) of sorbic and benzoic acids and gas chromatography-mass spectrometry (GC/MS) of propionic acid. For propionic acid, GC-MS was used because of its high sensitivity and selectivity in complicated matrix samples. Propionic acid was detected in 36.6% of fish samples and 50.4% of processed fish commodities. In contrast, benzoic acid was detected in 5.6% of fish samples, and sorbic acid was not detected in any sample. According to the Korean Food and Drug Administration (KFDA), fishery products and salted fish may only contain sorbic acid in amounts up to 2.0 g kg-1 and 1.0 g kg-1, respectively. The results of the monitoring in this study can be considered violations of KFDA regulations (total 124; benzoic acid 8, propionic acid 116). However, it is difficult to distinguish naturally generated organic acids and artificially added preservatives in fishery products. Therefore, further studies are needed to extend the database for distinction of naturally generated organic acids and added preservatives.", "title": "" }, { "docid": "d337cf524cf9c59149bb8e7eba6ef33a", "text": "Twelve years after the Kikwit Ebola outbreak in 1995, Ebola virus reemerged in the Occidental Kasaï province of the Democratic Republic of Congo (DRC) between May and November 2007, affecting more than 260 humans and causing 186 deaths. During this latter outbreak we conducted several epidemiological investigations to identify the underlying ecological conditions and animal sources. Qualitative social and environmental data were collected through interviews with villagers and by direct observation. The local populations reported no unusual morbidity or mortality among wild or domestic animals, but they described a massive annual fruit bat migration toward the southeast, up the Lulua River. 
Migrating bats settled in the outbreak area for several weeks, between April and May, nestling in the numerous fruit trees in Ndongo and Koumelele islands as well as in palm trees of a largely abandoned plantation. They were massively hunted by villagers, for whom they represented a major source of protein. By tracing back the initial human-human transmission events, we were able to show that, in May, the putative first human victim bought freshly killed bats from hunters to eat. We were able to reconstruct the likely initial human-human transmission events that preceded the outbreak. This study provides the most likely sequence of events linking a human Ebola outbreak to exposure to fruit bats, a putative virus reservoir. These findings support the suspected role of bats in the natural cycle of Ebola virus and indicate that the massive seasonal fruit bat migrations should be taken into account in operational Ebola risk maps and seasonal alerts in the DRC.", "title": "" }, { "docid": "dd5fa68b788cc0816c4e16f763711560", "text": "Over the last ten years the basic knowledge of brain structure and function has vastly expanded, and its incorporation into the developmental sciences is now allowing for more complex and heuristic models of human infancy. In a continuation of this effort, in this two-part work I integrate current interdisciplinary data from attachment studies on dyadic affective communications, neuroscience on the early developing right brain, psychophysiology on stress systems, and psychiatry on psychopathogenesis to provide a deeper understanding of the psychoneurobiological mechanisms that underlie infant mental health. In this article I detail the neurobiology of a secure attachment, an exemplar of adaptive infant mental health, and focus upon the primary caregiver's psychobiological regulation of the infant's maturing limbic system, the brain areas specialized for adapting to a rapidly changing environment. The infant's early developing right hemisphere has deep connections into the limbic and autonomic nervous systems and is dominant for the human stress response, and in this manner the attachment relationship facilitates the expansion of the child's coping capacities. This model suggests that adaptive infant mental health can be fundamentally defined as the earliest expression of flexible strategies for coping with the novelty and stress that is inherent in human interactions. This efficient right brain function is a resilience factor for optimal development over the later stages of the life cycle. RESUMEN: In the last ten years, basic knowledge of the structure and function of the brain has expanded considerably, and its incorporation into the developmental sciences now makes it possible to build more complex and heuristic models of human infancy. As a continuation of this effort, this two-part essay integrates current interdisciplinary information from attachment studies on dyadic affective communications, neuroscience on the early development of the right side of the brain, the psychophysiology of emotional stress systems, and psychiatry with respect to psychopathogenesis, in order to present a deeper understanding of the psychoneurobiological mechanisms that serve as the basis for infant mental health.
This essay explains in detail the neurobiology of a secure attachment relationship, an adaptive model of infant mental health, and focuses on the psychobiological regulation that the primary caregiver exerts over the maturation of the infant's limbic system, that is, the areas of the brain especially dedicated to adapting to a changing environment.", "title": "" }, { "docid": "c60957f1bf90450eb947d2b0ab346ffb", "text": "Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.", "title": "" }, { "docid": "6b5a7e58a8407fa5cda402d4996a3a10", "text": "In the last few years, Hadoop has become a \"de facto\" standard for processing large-scale data as an open source distributed system. In combination with data mining techniques, Hadoop improves data analysis utility. For this reason, a large amount of research has studied how to apply data mining techniques to the MapReduce framework in Hadoop. However, data mining can cause privacy violations, and this threat is a major obstacle to data mining using Hadoop. To solve this problem, numerous studies have been conducted. However, existing studies were insufficient and had several drawbacks. In this paper, we propose a privacy-preserving data mining technique in Hadoop that prevents privacy violations without utility degradation. We focus on the association rule mining algorithm, a representative data mining algorithm.
We validate the proposed technique to satisfy performance and preserve data privacy through the experimental results.", "title": "" }, { "docid": "509d77cef3f9ded37f75b0b1a1314e81", "text": "Object class detection has been a synonym for 2D bounding box localization for the longest time, fueled by the success of powerful statistical learning techniques, combined with robust image representations. Only recently, there has been a growing interest in revisiting the promise of computer vision from the early days: to precisely delineate the contents of a visual scene, object by object, in 3D. In this paper, we draw from recent advances in object detection and 2D-3D object lifting in order to design an object class detector that is particularly tailored towards 3D object class detection. Our 3D object class detection method consists of several stages gradually enriching the object detection output with object viewpoint, keypoints and 3D shape estimates. Following careful design, in each stage it constantly improves the performance and achieves state-of-the-art performance in simultaneous 2D bounding box and viewpoint estimation on the challenging Pascal3D+ [50] dataset.", "title": "" }, { "docid": "7f3d821ae9555caf4dbb4493445400b7", "text": "Sampling-based algorithms in the mould of RANSAC have emerged as one of the most successful methods for the fully automated registration of point clouds acquired by terrestrial laser scanning (TLS). Sampling methods in conjunction with 3D keypoint extraction, have shown promising results, e.g. the recent K-4PCS (Theiler et al., 2013). However, they still exhibit certain improbable failures, and are computationally expensive and slow if the overlap between scans is low. Here, we examine several variations of the basic K-4PCS framework that have the potential to improve its runtime and robustness. Since the method is inherently parallelizable, straight-forward multi-threading already brings down runtimes to a practically acceptable level (seconds to minutes). At a conceptual level, replacing the RANSAC error function with the more principled MSAC function (Torr and Zisserman, 2000) and introducing a minimum-distance prior to counter the near-field bias reduce failure rates by a factor of up to 4. On the other hand, replacing the repeated evaluation of the RANSAC error function with a voting scheme over the transformation parameters proved not to be generally applicable for the scan registration problem. All these possible extensions are tested experimentally on multiple challenging outdoor and indoor scenarios.", "title": "" }, { "docid": "b42c9db51f55299545588a1ee3f7102f", "text": "With the increasing development of Web 2.0, such as social media and online businesses, the need for perception of opinions, attitudes, and emotions grows rapidly. Sentiment analysis, the topic studying such subjective feelings expressed in text, has attracted significant attention from both the research community and industry. Although we have known sentiment analysis as a task of mining opinions expressed in text and analyzing the entailed sentiments and emotions, so far the task is still vaguely defined in the research literature because it involves many overlapping concepts and sub-tasks. Because this is an important area of scientific research, the field needs to clear this vagueness and define various directions and aspects in detail, especially for students, scholars, and developers new to the field. 
In fact, the field includes numerous natural language processing tasks with different aims (such as sentiment classification, opinion information extraction, opinion summarization, sentiment retrieval, etc.) and these have multiple solution paths. Bing Liu has done a great job in this book in providing a thorough exploration and an anatomy of the sentiment analysis problem and conveyed a wealth of knowledge about different aspects of the field.", "title": "" }, { "docid": "16fa2f02d0709c130cc35fce61793ae1", "text": "Evaluating similarity between graphs is of major importance in several computer vision and pattern recognition problems, where graph representations are often used to model objects or interactions between elements. The choice of a distance or similarity metric is, however, not trivial and can be highly dependent on the application at hand. In this work, we propose a novel metric learning method to evaluate distance between graphs that leverages the power of convolutional neural networks, while exploiting concepts from spectral graph theory to allow these operations on irregular graphs. We demonstrate the potential of our method in the field of connectomics, where neuronal pathways or functional connections between brain regions are commonly modelled as graphs. In this problem, the definition of an appropriate graph similarity function is critical to unveil patterns of disruptions associated with certain brain disorders. Experimental results on the ABIDE dataset show that our method can learn a graph similarity metric tailored for a clinical application, improving the performance of a simple k-nn classifier by 11.9% compared to a traditional distance metric.", "title": "" }, { "docid": "3300e4e29d160fb28861ac58740834b5", "text": "To facilitate proactive fault management in large-scale systems such as IBM Blue Gene/P, online failure prediction is of paramount importance. While many techniques have been presented for online failure prediction, questions arise regarding two commonly used approaches: period-based and event-driven. Which one has better accuracy? What is the best observation window (i.e., the time interval used to collect evidence before making a prediction)? How does the lead time (i.e., the time interval from the prediction to the failure occurrence) impact prediction arruracy? To answer these questions, we analyze and compare period-based and event-driven prediction approaches via a Bayesian prediction model. We evaluate these prediction approaches, under a variety of testing parameters, by means of RAS logs collected from a production supercomputer at Argonne National Laboratory. Experimental results show that the period-based Bayesian model and the event-driven Bayesian model can achieve up to 65.0% and 83.8% prediction accuracy, respectively. Furthermore, our sensitivity study indicates that the event-driven approach seems more suitable for proactive fault management in large-scale systems like Blue Gene/P.", "title": "" }, { "docid": "bee4bd3019983dc7f66cfd3dafc251ac", "text": "We present a framework to systematically analyze convolutional neural networks (CNNs) used in classification of cars in autonomous vehicles. Our analysis procedure comprises an image generator that produces synthetic pictures by sampling in a lower dimension image modification subspace and a suite of visualization tools. The image generator produces images which can be used to test the CNN and hence expose its vulnerabilities. 
The presented framework can be used to extract insights of the CNN classifier, compare across classification models, or generate training and validation datasets.", "title": "" }, { "docid": "de14bea572cc164a2f15a05907de481a", "text": "At present, artificial intelligence (AI) has made considerable progress in recognition of speech, face, and emotion. Potential application to robots could bring significant improvement on intelligent robotic systems. However, limited resource on robots cannot satisfy the large-scale computation and storage that the AI recognition requires. Cloud provides an efficient way for robots, where they off-load the computation too. Therefore, we present a cognition-based context-aware cloud computing framework, which is designed to help robot’s sense environments including user’s emotions. Based on the recognized context information, robots could optimize their responses and improve the user’s experience on interaction. The framework contains a customizable context monitoring system on the mobile end to collect and process the data from the robot’s sensors. Besides, it integrates various AI recognition services in the cloud to extract the context facts by analyzing and understanding the data. Once the context data is extracted, the results are pushed back to mobile end for making a better decision in the next interactions. In this paper, we demonstrate and evaluate the framework by a real case, an educational mobile app for English learning. The results show that the proposed framework could significantly improve the interaction and intelligence of mobile robots.", "title": "" }, { "docid": "47f2a5a61677330fc85ff6ac700ac39f", "text": "We present CHALET, a 3D house simulator with support for navigation and manipulation. CHALET includes 58 rooms and 10 house configuration, and allows to easily create new house and room layouts. CHALET supports a range of common household activities, including moving objects, toggling appliances, and placing objects inside closeable containers. The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment.", "title": "" }, { "docid": "7242400e9d0043b74e5baa931ccb83ed", "text": "The Wikipedia is a collaborative encyclopedia: anyone can contribute to its articles simply by clicking on an \"edit\" button. The open nature of the Wikipedia has been key to its success, but has also created a challenge: how can readers develop an informed opinion on its reliability? We propose a system that computes quantitative values of trust for the text in Wikipedia articles; these trust values provide an indication of text reliability.\n The system uses as input the revision history of each article, as well as information about the reputation of the contributing authors, as provided by a reputation system. The trust of a word in an article is computed on the basis of the reputation of the original author of the word, as well as the reputation of all authors who edited text near the word. The algorithm computes word trust values that vary smoothly across the text; the trust values can be visualized using varying text-background colors. 
The algorithm ensures that all changes to an article's text are reflected in the trust values, preventing surreptitious content changes.\n We have implemented the proposed system, and we have used it to compute and display the trust of the text of thousands of articles of the English Wikipedia. To validate our trust-computation algorithms, we show that text labeled as low-trust has a significantly higher probability of being edited in the future than text labeled as high-trust.", "title": "" }, { "docid": "88bc4f8a24a2e81a9c133d11a048ca10", "text": "In this paper, we give an overview of the HDF5 technology suite and some of its applications. We discuss the HDF5 data model, the HDF5 software architecture and some of its performance enhancing capabilities.", "title": "" }, { "docid": "0eed7e3a9128b10f8c4711592b9628ee", "text": "Visual defects, called mura in the field, sometimes occur during the manufacturing of the flat panel liquid crystal displays. In this paper we propose an automatic inspection method that reliably detects and quantifies TFT-LCD regionmura defects. The method consists of two phases. In the first phase we segment candidate region-muras from TFT-LCD panel images using the modified regression diagnostics and Niblack’s thresholding. In the second phase, based on the human eye’s sensitivity to mura, we quantify mura level for each candidate, which is used to identify real muras by grading them as pass or fail. Performance of the proposed method is evaluated on real TFT-LCD panel samples. key words: Machine vision, image segmentation, regression diagnostics, industrial inspection, visual perception.", "title": "" }, { "docid": "bc6a13cc44a77d29360d04a2bc96bd61", "text": "Security competitions have become a popular way to foster security education by creating a competitive environment in which participants go beyond the effort usually required in traditional security courses. Live security competitions (also called “Capture The Flag,” or CTF competitions) are particularly well-suited to support handson experience, as they usually have both an attack and a defense component. Unfortunately, because these competitions put several (possibly many) teams against one another, they are difficult to design, implement, and run. This paper presents a framework that is based on the lessons learned in running, for more than 10 years, the largest educational CTF in the world, called iCTF. The framework’s goal is to provide educational institutions and other organizations with the ability to run customizable CTF competitions. The framework is open and leverages the security community for the creation of a corpus of educational security challenges.", "title": "" }, { "docid": "ef8d26a05cf1a9df724437b684b85dca", "text": "Speech technology and systems in human computer interaction have witnessed a steady and important advancement over last two decades. Today, speech technologies are commercially available for boundless but interesting range of tasks. These technologies permit machines to respond correctly and consistently to human voices, and provide useful and valuable services. In the present era, mainly Hidden Markov Model (HMMs) based speech recognizers are used. This paper aims to present a speech recognition system using Hidden Markov Model. Hidden Markov Model Toolkit (HTK) is used to develop the system. It is used to recognize the isolated words using acoustic word model.", "title": "" } ]
scidocsrr
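The Wikipedia-trust passage in the record above computes the trust of each word from the reputation of its original author and of the authors who later edited nearby text. The toy version below captures that idea; the decay constant, blending weight, and data layout are assumptions made for illustration, not the deployed system's parameters.

```python
def word_trust(words, author_rep, edits, decay=10.0, blend=0.5):
    """Toy word-trust computation in the spirit of the passage above.

    words      : list of (word, original_author) pairs for the current revision
    author_rep : dict mapping author -> reputation in [0, 1]
    edits      : list of (position, editing_author) pairs from later revisions
    Each word starts at its original author's reputation and is nudged toward
    the reputation of editors who changed text near it, with the influence
    decaying with distance from the edit position.
    """
    trust = [author_rep.get(a, 0.0) for _, a in words]
    for pos, editor in edits:
        rep = author_rep.get(editor, 0.0)
        for i in range(len(trust)):
            w = blend * max(0.0, 1.0 - abs(i - pos) / decay)   # proximity weight
            trust[i] = (1.0 - w) * trust[i] + w * rep
    return trust
```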
be1251672e2ef44c457d70a7d89cb546
Understanding MOOC students: motivations and behaviours indicative of MOOC completion
[ { "docid": "a7eff25c60f759f15b41c85ac5e3624f", "text": "Connectivist massive open online courses (cMOOCs) represent an important new pedagogical approach ideally suited to the network age. However, little is known about how the learning experience afforded by cMOOCs is suited to learners with different skills, motivations, and dispositions. In this study, semi-structured interviews were conducted with 29 participants on the Change11 cMOOC. These accounts were analyzed to determine patterns of engagement and factors affecting engagement in the course. Three distinct types of engagement were recognized – active participation, passive participation, and lurking. In addition, a number of key factors that mediated engagement were identified including confidence, prior experience, and motivation. This study adds to the overall understanding of learning in cMOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how the learning experience afforded by cMOOCs suits the diverse range of learners that may coexist within a cMOOC. These insights can be used by designers of future cMOOCs to tailor the learning experience to suit the diverse range of learners that may choose to learn in this way.", "title": "" } ]
[ { "docid": "b80ab14d0908a2a66a4c5a020860a6ac", "text": "We evaluate U.S. firms’ leverage determinants by studying how 1,801 firms paid for 2,073 very large investments during the period 1989-2006. This approach complements existing empirical work on capital structure, which typically estimates regression models for a broad set of CRSP/Compustat firms. If firms making large investments generally raise new external funds, their securities issuances should provide information about managers’ attitudes toward leverage. Our data indicate that large investments are mostly externally financed and that firms issue securities that tend to move them quite substantially toward target debt ratios. Firms also tend to issue more equity following a share price runup or when the market-to-book ratio is high. We find little support for the standard pecking order hypothesis.", "title": "" }, { "docid": "e53c7f8890d3bf49272e08d4446703a4", "text": "In orthogonal frequency-division multiplexing (OFDM) systems, it is generally assumed that the channel response is static in an OFDM symbol period. However, the assumption does not hold in high-mobility environments. As a result, intercarrier interference (ICI) is induced, and system performance is degraded. A simple remedy for this problem is the application of the zero-forcing (ZF) equalizer. Unfortunately, the direct ZF method requires the inversion of an N times N ICI matrix, where N is the number of subcarriers. When N is large, the computational complexity can become prohibitively high. In this paper, we first propose a low-complexity ZF method to solve the problem in single-input-single-output (SISO) OFDM systems. The main idea is to explore the special structure inherent in the ICI matrix and apply Newton's iteration for matrix inversion. With our formulation, fast Fourier transforms (FFTs) can be used in the iterative process, reducing the complexity from O (N3) to O (N log2 N). Another feature of the proposed algorithm is that it can converge very fast, typically in one or two iterations. We also analyze the convergence behavior of the proposed method and derive the theoretical output signal-to-interference-plus-noise ratio (SINR). For a multiple-input-multiple-output (MIMO) OFDM system, the complexity of the ZF method becomes more intractable. We then extend the method proposed for SISO-OFDM systems to MIMO-OFDM systems. It can be shown that the computational complexity can be reduced even more significantly. Simulations show that the proposed methods perform almost as well as the direct ZF method, while the required computational complexity is reduced dramatically.", "title": "" }, { "docid": "0cc16f8fe35cbf169de8263236d08166", "text": "In this paper, we revisit a generally accepted opinion: implementing Elliptic Curve Cryptosystem (ECC) over GF (2) on sensor motes using small word size is not appropriate because XOR multiplication over GF (2) is not efficiently supported by current low-powered microprocessors. Although there are some implementations over GF (2) on sensor motes, their performances are not satisfactory enough to be used for wireless sensor networks (WSNs). We have found that a field multiplication over GF (2) are involved in a number of redundant memory accesses and its inefficiency is originated from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose some techniques for reducing unnecessary memory accesses. 
With the proposed strategies, the running time of field multiplication and reduction over GF (2) can be decreased by 21.1% and 24.7%, respectively. These savings noticeably decrease execution times spent in Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around 15% ∼ 19%. We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve – a kind of TinyOS package supporting elliptic curve operations) which is the fastest ECC implementation over GF (2) on 8-bit sensor motes using ATmega128L as far as we know. Through comparisons with existing software implementations of ECC built in C or hybrid of C and inline assembly on sensor motes, we show that TinyECCK outperforms them in terms of running time, code size and supporting services. Furthermore, we show that a field multiplication over GF (2) can be faster than that over GF (p) on 8-bit ATmega128L processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over GF (p). TinyECCK with sect163k1 can compute a scalar multiplication within 1.14 secs on a MICAz mote at the expense of 5,592-byte of ROM and 618-byte of RAM. Furthermore, it can also generate a signature and verify it in 1.37 and 2.32 secs with 13,748-byte of ROM and 1,004-byte of RAM. 2 Seog Chung Seo et al.", "title": "" }, { "docid": "a0c1f145f423052b6e8059c5849d3e34", "text": "Improved methods of assessment and research design have established a robust and causal association between stressful life events and major depressive episodes. The chapter reviews these developments briefly and attempts to identify gaps in the field and new directions in recent research. There are notable shortcomings in several important topics: measurement and evaluation of chronic stress and depression; exploration of potentially different processes of stress and depression associated with first-onset versus recurrent episodes; possible gender differences in exposure and reactivity to stressors; testing kindling/sensitization processes; longitudinal tests of diathesis-stress models; and understanding biological stress processes associated with naturally occurring stress and depressive outcomes. There is growing interest in moving away from unidirectional models of the stress-depression association, toward recognition of the effects of contexts and personal characteristics on the occurrence of stressors, and on the likelihood of progressive and dynamic relationships between stress and depression over time-including effects of childhood and lifetime stress exposure on later reactivity to stress.", "title": "" }, { "docid": "f0eb42b522eadddaff7ebf479f791193", "text": "High-density and low-leakage 1W1R 2-port (2P) SRAM is realized by 6T 1-port SRAM bitcell with double pumping internal clock in 16 nm FinFET technology. Proposed clock generator with address latch circuit enables robust timing design without sever setup/hold margin. We designed a 256 kb 1W1R 2P SRAM macro which achieves the highest density of 6.05 Mb/mm2. Measured data shows that a 313 ps of read-access-time is observed at 0.8 V. Standby leakage power in resume standby (RS) mode is reduced by 79% compared to the conventional dual-port SRAM without RS.", "title": "" }, { "docid": "fddbcbdb0de1c7d49fe5545f3ab1bdfa", "text": "Photovoltaic Systems (PVS) can be easily integrated in residential buildings hence they will be the main responsible of making low-voltage grid power flow bidirectional. 
Control issues on both the PV side and on the grid side have received much attention from manufacturers, competing for efficiency and low distortion, and from academia, proposing new ideas that soon become state-of-the-art. This paper aims at reviewing part of these topics (MPPT, current and voltage control), leaving to a future paper to complete the scenario. Implementation issues on the Digital Signal Processor (DSP), the mandatory choice in this market segment, are discussed.", "title": "" }, { "docid": "2639f5d735abed38ed4f7ebf11072087", "text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.", "title": "" }, { "docid": "c10ac9c3117627b2abb87e268f5de6b1", "text": "Nowadays, the number of crimes against children is increasing day by day. This paper presents the implementation of a School Security System (SSS) via RFID to avoid crime and illegal activities by students and reduce worries among parents. The project is the combination of the latest technology using RFID, GPS/GSM, image processing, WSN and web based development using Php, VB.net, Apache web server and SQL. By using RFID technology it is easy to track the student, thus enhancing the security and safety in the selected zone. The information about the student, such as in time and out time from the Bus and campus, will be recorded to the web based system, and the GPS/GSM system automatically sends information (SMS / Phone Call) to their parents that the student arrived at the Bus/Campus safely.", "title": "" }, { "docid": "0a80057b2c43648e668809e185a68fe6", "text": "A seminar that surveys state-of-the-art microprocessors offers an excellent forum for students to see how computer architecture techniques are employed in practice and for them to gain a detailed knowledge of the state of the art in microprocessor design. Princeton and the University of Virginia have developed such a seminar, organized around student presentations and a substantial research project. The course can accommodate a range of students, from advanced undergraduates to senior graduate students. The course can also be easily adapted to a survey of embedded processors. This paper describes the version taught at the University of Virginia and lessons learned from the experience.", "title": "" }, { "docid": "5c7a66c440b73b9ff66cd73c8efb3718", "text": "Image captioning is a crucial task in the interaction of computer vision and natural language processing. It is an important way to help humans understand the world better. There are many studies on image English captioning, but little work on image Chinese captioning because of the lack of corresponding datasets. This paper focuses on image Chinese captioning by using abundant English datasets. In this paper, a method of adding English information to image Chinese captioning is proposed. 
We validate the use of English information with state-of-the art performance on the datasets: Flickr8K-CN.", "title": "" }, { "docid": "780e49047bdacda9862c51338aa1397f", "text": "We consider stochastic volatility models under parameter uncertainty and investigate how model derived prices of European options are affected. We let the pricing parameters evolve dynamically in time within a specified region, and formalise the problem as a control problem where the control acts on the parameters to maximise/minimise the option value. Through a dual representation with backward stochastic differential equations, we obtain explicit equations for Heston’s model and investigate several numerical solutions thereof. In an empirical study, we apply our results to market data from the S&P 500 index where the model is estimated to historical asset prices. We find that the conservative model-prices cover 98% of the considered market-prices for a set of European call options.", "title": "" }, { "docid": "cdd43b3baa9849441817b5f31d7cb0e0", "text": "Traffic light control systems are widely used to monitor and control the flow of automobiles through the junction of many roads. They aim to realize smooth motion of cars in the transportation routes. However, the synchronization of multiple traffic light systems at adjacent intersections is a complicated problem given the various parameters involved. Conventional systems do not handle variable flows approaching the junctions. In addition, the mutual interference between adjacent traffic light systems, the disparity of cars flow with time, the accidents, the passage of emergency vehicles, and the pedestrian crossing are not implemented in the existing traffic system. This leads to traffic jam and congestion. We propose a system based on PIC microcontroller that evaluates the traffic density using IR sensors and accomplishes dynamic timing slots with different levels. Moreover, a portable controller device is designed to solve the problem of emergency vehicles stuck in the overcrowded roads.", "title": "" }, { "docid": "3886cc26572b2d82c23790ad52342222", "text": "This paper presents a quantitative human performance model of making single-stroke pen gestures within certain error constraints in terms of production time. Computed from the properties of Curves, Line segments, and Corners (CLC) in a gesture stroke, the model may serve as a foundation for the design and evaluation of existing and future gesture-based user interfaces at the basic motor control efficiency level, similar to the role of previous \"laws of action\" played to pointing, crossing or steering-based user interfaces. We report and discuss our experimental results on establishing and validating the CLC model, together with other basic empirical findings in stroke gesture production.", "title": "" }, { "docid": "6346955de2fa46e5c109ada42b4e9f77", "text": "Retinopathy of prematurity (ROP) is a disease that can cause blindness in very low birthweight infants. The incidence of ROP is closely correlated with the weight and the gestational age at birth. Despite current therapies, ROP continues to be a highly debilitating disease. Our advancing knowledge of the pathogenesis of ROP has encouraged investigations into new antivasculogenic therapies. The purpose of this article is to review the findings on the pathophysiological mechanisms that contribute to the transition between the first and second phases of ROP and to investigate new potential therapies. 
Oxygen has been well characterized for the key role that it plays in retinal neoangiogenesis. Low or high levels of pO2 regulate the normal or abnormal production of hypoxia-inducible factor 1 and vascular endothelial growth factors (VEGF), which are the predominant regulators of retinal angiogenesis. Although low oxygen saturation appears to reduce the risk of severe ROP when carefully controlled within the first few weeks of life, the optimal level of saturation still remains uncertain. IGF-1 and Epo are fundamentally required during both phases of ROP, as alterations in their protein levels can modulate disease progression. Therefore, rhIGF-1 and rhEpo were tested for their abilities to prevent the loss of vasculature during the first phase of ROP, whereas anti-VEGF drugs were tested during the second phase. At present, previous hypotheses concerning ROP should be amended with new pathogenetic theories. Studies on the role of genetic components, nitric oxide, adenosine, apelin and β-adrenergic receptor have revealed new possibilities for the treatment of ROP. The genetic hypothesis that single-nucleotide polymorphisms within the β-ARs play an active role in the pathogenesis of ROP suggests the concept of disease prevention using β-blockers. In conclusion, all factors that can mediate the progression from the avascular to the proliferative phase might have significant implications for the further understanding and treatment of ROP.", "title": "" }, { "docid": "e6548454f46962b5ce4c5d4298deb8e7", "text": "The use of SVM (Support Vector Machines) in detecting e-mail as spam or nonspam by incorporating feature selection using GA (Genetic Algorithm) is investigated. An GA approach is adopted to select features that are most favorable to SVM classifier, which is named as GA-SVM. Scaling factor is exploited to measure the relevant coefficients of feature to the classification task and is estimated by GA. Heavy-bias operator is introduced in GA to promote sparse in the scaling factors of features. So, feature selection is performed by eliminating irrelevant features whose scaling factor is zero. The experiment results on UCI Spam database show that comparing with original SVM classifier, the number of support vector decreases while better classification results are achieved based on GA-SVM.", "title": "" }, { "docid": "e881c2ab6abc91aa8e7cbe54d861d36d", "text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. 
Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.", "title": "" }, { "docid": "1f278ddc0d643196ff584c7ea82dc89b", "text": "We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in R^d, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an error bound ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, and the latter yields constant query time. A space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε^((d-1)/2)) to O(1/ε^((d-1)/4)). Our approach is based on a very simple algorithm. Both lower bounds and upper bounds on the performance of the algorithm are presented.\n To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. We show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods.", "title": "" }, { "docid": "d2a1ecb8ad28ed5ba75460827341f741", "text": "Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which will assign distinct representations for each word sense. The basic idea is that both word sense representation (WSR) and word sense disambiguation (WSD) will benefit from each other: (1) high-quality WSR will capture rich information about words and senses, which should be helpful for WSD, and (2) high-quality WSD will provide reliable disambiguated corpora for learning better sense representations. Experimental results show that, our model improves the performance of contextual word similarity compared to existing WSR methods, outperforms state-of-the-art supervised methods on domain-specific WSD, and achieves competitive performance on coarse-grained all-words WSD.", "title": "" }, { "docid": "1043fd2e3eb677a768e922f5daf5a5d0", "text": "A transformer magnetizing current offset for a phase-shift full-bridge (PSFB) converter is dealt in this paper. A model of this current offset is derived and it is presented as a first order system having a pole at a low frequency when the effects from the parasitic components and the switching transition are considered. A digital offset compensator eliminating this current offset is proposed and designed considering the interference in an output voltage regulation. The performances of the proposed compensator are verified by experiments with a 1.2kW PSFB converter. The saturation of the transformer is prevented by this compensator.", "title": "" } ]
scidocsrr
a352dd701300a73364dde5029a62df2a
ReVision: automated classification, analysis and redesign of chart images
[ { "docid": "98b30c5056d33f4f92bedc4f2e2698ce", "text": "We present an approach for classifying images of charts based on the shape and spatial relationships of their primitives. Five categories are considered: bar-charts, curve-plots, pie-charts, scatter-plots and surface-plots. We introduce two novel features to represent the structural information based on (a) region segmentation and (b) curve saliency. The local shape is characterized using the Histograms of Oriented Gradients (HOG) and the Scale Invariant Feature Transform (SIFT) descriptors. Each image is represented by sets of feature vectors of each modality. The similarity between two images is measured by the overlap in the distribution of the features -measured using the Pyramid Match algorithm. A test image is classified based on its similarity with training images from the categories. The approach is tested with a database of images collected from the Internet.", "title": "" } ]
[ { "docid": "1cb2d77cbe4c164e0a9a9481cd268d01", "text": "Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions.", "title": "" }, { "docid": "6033682cf01008f027877e3fda4511f8", "text": "The HER-2/neu oncogene is a member of the erbB-like oncogene family, and is related to, but distinct from, the epidermal growth factor receptor. This gene has been shown to be amplified in human breast cancer cell lines. In the current study, alterations of the gene in 189 primary human breast cancers were investigated. HER-2/neu was found to be amplified from 2- to greater than 20-fold in 30% of the tumors. Correlation of gene amplification with several disease parameters was evaluated. Amplification of the HER-2/neu gene was a significant predictor of both overall survival and time to relapse in patients with breast cancer. It retained its significance even when adjustments were made for other known prognostic factors. Moreover, HER-2/neu amplification had greater prognostic value than most currently used prognostic factors, including hormonal-receptor status, in lymph node-positive disease. These data indicate that this gene may play a role in the biologic behavior and/or pathogenesis of human breast cancer.", "title": "" }, { "docid": "13eaa316c8e41a9cc3807d60ba72db66", "text": "This is a short paper introducing pitfalls when implementing averaged scores. Although, it is common to compute averaged scores, it is good to specify in detail how the scores are computed.", "title": "" }, { "docid": "16d7767e9f2216ce0789b8a92d8d65e4", "text": "In the rst genetic programming (GP) book John Koza noticed that tness histograms give a highly informative global view of the evolutionary process (Koza, 1992). The idea is further developed in this paper by discussing GP evolution in analogy to a physical system. I focus on three interrelated major goals: (1) Study the the problem of search eeort allocation in GP; (2) Develop methods in the GA/GP framework that allow adap-tive control of diversity; (3) Study ways of adaptation for faster convergence to optimal solution. An entropy measure based on phenotype classes is introduced which abstracts tness histograms. In this context, entropy represents a measure of population diversity. 
An analysis of entropy plots and their correlation with other statistics from the population enables an intelligent adaptation of search control.", "title": "" }, { "docid": "2794ea63eb1a24ebd1cea052345569eb", "text": "Ethernet is considered as a future communication standard for distributed embedded systems in the automotive and industrial domains. A key challenge is the deterministic low-latency transport of Ethernet frames, as many safety-critical real-time applications in these domains have tight timing requirements. Time-sensitive networking (TSN) is an upcoming set of Ethernet standards, which (among other things) address these requirements by specifying new quality of service mechanisms in the form of different traffic shapers. In this paper, we consider TSN's time-aware and peristaltic shapers and evaluate whether these shapers are able to fulfill these strict timing requirements. We present a formal timing analysis, which is a key requirement for the adoption of Ethernet in safety-critical real-time systems, to derive worst-case latency bounds for each shaper. We use a realistic automotive Ethernet setup to compare these shapers to each other and against Ethernet following IEEE 802.1Q.", "title": "" }, { "docid": "c1dbf418f72ad572b3b745a94fe8fbf7", "text": "In this work we show how to integrate prior statistical knowledge, obtained through principal components analysis (PCA), into a convolutional neural network in order to obtain robust predictions even when dealing with corrupted or noisy data. Our network architecture is trained end-to-end and includes a specifically designed layer which incorporates the dataset modes of variation discovered via PCA and produces predictions by linearly combining them. We also propose a mechanism to focus the attention of the CNN on specific regions of interest of the image in order to obtain refined predictions. We show that our method is effective in challenging segmentation and landmark localization tasks.", "title": "" }, { "docid": "d46916f82e8f6ac8f4f3cb3df1c6875f", "text": "Mobile devices are becoming the prevalent computing platform for most people. TouchDevelop is a new mobile development environment that enables anyone with a Windows Phone to create new apps directly on the smartphone, without a PC or a traditional keyboard. At the core is a new mobile programming language and editor that was designed with the touchscreen as the only input device in mind. Programs written in TouchDevelop can leverage all phone sensors such as GPS, cameras, accelerometer, gyroscope, and stored personal data such as contacts, songs, pictures. Thousands of programs have already been written and published with TouchDevelop.", "title": "" }, { "docid": "ca17638b251d20cca2973a3f551b822f", "text": "The first edition of Artificial Intelligence: A Modern Approach has become a classic in the AI literature. It has been adopted by over 600 universities in 60 countries, and has been praised as the definitive synthesis of the field. In the second edition, every chapter has been extensively rewritten. Significant new material has been introduced to cover areas such as constraint satisfaction, fast propositional inference, planning graphs, internet agents, exact probabilistic inference, Markov Chain Monte Carlo techniques, Kalman filters, ensemble learning methods, statistical learning, probabilistic natural language models, probabilistic robotics, and ethical aspects of AI. 
The book is supported by a suite of online resources including source code, figures, lecture slides, a directory of over 800 links to \"AI on the Web,\" and an online discussion group. All of this is available at: aima.cs.berkeley.edu.", "title": "" }, { "docid": "12840153a7f2be146a482ed78e7822a6", "text": "We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost. ! 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c5380f25f7b3005e8cbfceba9bb4bfa0", "text": "We propose an event-driven model for headline generation. Given an input document, the system identifies a key event chain by extracting a set of structural events that describe them. Then a novel multi-sentence compression algorithm is used to fuse the extracted events, generating a headline for the document. Our model can be viewed as a novel combination of extractive and abstractive headline generation, combining the advantages of both methods using event structures. Standard evaluation shows that our model achieves the best performance compared with previous state-of-the-art systems.", "title": "" }, { "docid": "20af5209de71897158820f935018d877", "text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.", "title": "" }, { "docid": "988ce34190564babadb1e3b30a0d927c", "text": "The kinetics of saccharose fermentation by Kombucha is not yet well defined due to lack of knowledge of reaction mechanisms taking place during this process. 
In this study, the kinetics of saccharose fermentation by Kombucha was analysed using the suggested empirical model. The data were obtained on 1.5 g L of black tea, with 66.47 g L of saccharose and using 10 or 15% (V/V) of Kombucha. The total number of viable cells was as follows: approximately 5×10 of yeast cells per mL of the inoculum and approximately 2x10 of bacteria cells per mL of the inoculum. The samples were analysed after 0, 3, 4, 5, 6, 7 and 10 days. Their pH values and contents of saccharose, glucose, fructose, total acids and ethanol were determined. A saccharose concentration model was defined as a sigmoidal function at 22 and 30 °C, and with 10 and 15% (V/V) of inoculum quantity. The determination coefficients of the functions were very high (R > 0.99). Reaction rates were calculated as first derivatives of Boltzmann’s functions. No simple correlation between the rate of reaction and independent variables (temperature and inoculum concentration) was found. Analysis of the empirical model indicated that saccharose fermentation by Kombucha occurred according to very complex kinetics.", "title": "" }, { "docid": "ea64ba0b1c3d4ed506fb3605893fef92", "text": "We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition.", "title": "" }, { "docid": "58e0b66d55ca7f5571f4f55d8fcf822c", "text": "Events of various kinds are mentioned and discussed in text documents, whether they are books, news articles, blogs or microblog feeds. The paper starts by giving an overview of how events are treated in linguistics and philosophy. We follow this discussion by surveying how events and associated information are handled in computationally. In particular, we look at how textual documents can be mined to extract events and ancillary information. These days, it is mostly through the application of various machine learning techniques. We also discuss applications of event detection and extraction systems, particularly in summarization, in the medical domain and in the context of Twitter posts. We end the paper with a discussion of challenges and future directions.", "title": "" }, { "docid": "c6de5f33ca775fb42db4667b0dcc74bf", "text": "Robotic-assisted laparoscopic prostatectomy is a surgical procedure performed to eradicate prostate cancer. Use of robotic assistance technology allows smaller incisions than the traditional laparoscopic approach and results in better patient outcomes, such as less blood loss, less pain, shorter hospital stays, and better postoperative potency and continence rates. 
This surgical approach creates unique challenges in patient positioning for the perioperative team because the patient is placed in the lithotomy with steep Trendelenburg position. Incorrect positioning can lead to nerve damage, pressure ulcers, and other complications. Using a special beanbag positioning device made specifically for use with this severe position helps prevent these complications.", "title": "" }, { "docid": "81086098b7516e9f03559aa8b99df90e", "text": "Abstractive text summarization aims to shorten long text documents into a human readable form that contains the most important facts from the original document. However, the level of actual abstraction as measured by novel phrases that do not appear in the source document remains low in existing approaches. We propose two techniques to improve the level of abstraction of generated summaries. First, we decompose the decoder into a contextual network that retrieves relevant parts of the source document, and a pretrained language model that incorporates prior knowledge about language generation. Second, we propose a novelty metric that is optimized directly through policy learning to encourage the generation of novel phrases. Our model achieves results comparable to state-of-the-art models, as determined by ROUGE scores and human evaluations, while achieving a significantly higher level of abstraction as measured by n-gram overlap with the source document.", "title": "" }, { "docid": "befbfb5b083cddb7fb43ebaa8df244c1", "text": "The aim of this study was to adapt and validate the Spanish version of the Sport Motivation Scale-II (S-SMS-II) in adolescent athletes. The sample included 766 Spanish adolescents (263 females and 503 males; average age = 13.71 ± 1.30 years old). The methodological steps established by the International Test Commission were followed. Four measurement models were compared employing the maximum likelihood estimation (with six, five, three, and two factors). Then, factorial invariance analyses were conducted and the effect sizes were calculated. Finally, the reliability was calculated using Cronbach's alpha, omega, and average variance extracted coefficients. The five-factor S-SMS-II showed the best indices of fit (Cronbach's alpha .64 to .74; goodness of fit index .971, root mean square error of approximation .044, comparative fit index .966). Factorial invariance was also verified across gender and between sport-federated athletes and non-federated athletes. 
The proposed S-SMS-II is discussed according to previous validated versions (English, Portuguese, and Chinese).", "title": "" }, { "docid": "021789cea259697f236986028218e3f6", "text": "In the IT world of corporate networking, how businesses store and compute data is starting to shift from in-house servers to the cloud. However, some enterprises are still hesitant to make this leap to the cloud because of their information security and data privacy concerns. Enterprises that want to invest into this service need to feel confident that the information stored on the cloud is secure. Due to this need for confidence, trust is one of the major qualities that cloud service providers (CSPs) must build for cloud service users (CSUs). To do this, a model that all CSPs can follow must exist to establish a trust standard in the industry. If no concrete model exists, the future of cloud computing will be stagnant. This paper presents a new trust model that involves all the cloud stakeholders such as CSU, CSP, and third-party auditors. Our proposed trust model is objective since it involves third-party auditors to develop unbiased trust between the CSUs and the CSPs. Furthermore, to support the implementation of the proposed trust model, we rank CSPs according to the trust-values obtained from the trust model. The final score for each participating CSP will be determined based on the third-party assessment and the feedback received from the CSUs.", "title": "" }, { "docid": "67b41c7c37f0e497d2019399c0a87af9", "text": "RAYNAUD’S Disease is a vasospastic disorder affecting primarily the distal resistance vessels. The disease is typically characterized by the abrupt onset of digital pallor or cyanosis in response to cold exposure or stress. Raynaud’s Disease may occur independently or be associated with other conditions (systemic lupus erythematosus and scleroderma) and connective tissue diseases. Initial symptoms may include a burning sensation in the affected area accompanied by allodynia and painful paresthesias with vasomotor (cold, cyanotic) changes. Ultimately, as the ischemia becomes more chronic, this condition may progress to amputation of the affected digits. The most common indication for spinal cord stimulation in the United States is for chronic painful neuropathies. However, in Europe, spinal cord stimulation is frequently used to treat ischemic conditions, such as peripheral vascular disease and coronary occlusive disease. Although technically an off-label indication in the United States, this practice is supported by many published studies. There have also been case reports of its use in other diseases resulting in arterial insufficiency to the extremities, such as thromboangiitis obliterans (Buerger’s Disease), but its use in Raynaud’s Disease is relatively underreported. This case describes the use of cervical spinal cord stimulation to treat refractory digital ischemia in a patient with advanced Raynaud’s Disease.", "title": "" }, { "docid": "6c97853046dd2673d9c83990119ef43c", "text": "Atomic actions (or transactions) are useful for coping with concurrency and failures. One way of ensuring atomicity of actions is to implement applications in terms of atomic data types: abstract data types whose objects ensure serializability and recoverability of actions using them. Many atomic types can be implemented to provide high levels of concurrency by taking advantage of algebraic properties of the type's operations, for example, that certain operations commute. 
In this paper we analyze the level of concurrency permitted by an atomic type. We introduce several local constraints on individual objects that suffice to ensure global atomicity of actions; we call these constraints local atomicity properties. We present three local atomicity properties, each of which is optimal: no strictly weaker local constraint on objects suffices to ensure global atomicity for actions. Thus, the local atomicity properties define precise limits on the amount of concurrency that can be permitted by an atomic type.", "title": "" } ]
scidocsrr
912991cba9804e1d19cdac74ab16bdd1
Sliding-mode controller for four-wheel-steering vehicle: Trajectory-tracking problem
[ { "docid": "6fdee3d247a36bc7d298a7512a11118a", "text": "Fully automatic driving is emerging as the approach to dramatically improve efficiency (throughput per unit of space) while at the same time leading to the goal of zero accidents. This approach, based on fully automated vehicles, might improve the efficiency of road travel in terms of space and energy used, and in terms of service provided as well. For such automated operation, trajectory planning methods that produce smooth trajectories, with low level associated accelerations and jerk for providing human comfort, are required. This paper addresses this problem proposing a new approach that consists of introducing a velocity planning stage in the trajectory planner. Moreover, this paper presents the design and simulation evaluation of trajectory-tracking and path-following controllers for autonomous vehicles based on sliding mode control. A new design of sliding surface is proposed, such that lateral and angular errors are internally coupled with each other (in cartesian space) in a sliding surface leading to convergence of both variables.", "title": "" } ]
[ { "docid": "2c91e6ca6cf72279ad084c4a51b27b1c", "text": "Knowing where the host lane lies is paramount to the effectiveness of many advanced driver assistance systems (ADAS), such as lane keep assist (LKA) and adaptive cruise control (ACC). This paper presents an approach for improving lane detection based on the past trajectories of vehicles. Instead of expensive high-precision map, we use the vehicle trajectory information to provide additional lane-level spatial support of the traffic scene, and combine it with the visual evidence to improve each step of the lane detection procedure, thereby overcoming typical challenges of normal urban streets. Such an approach could serve as an Add-On to enhance the performance of existing lane detection systems in terms of both accuracy and robustness. Experimental results in various typical but challenging scenarios show the effectiveness of the proposed system.", "title": "" }, { "docid": "46360fec3d7fa0adbe08bb4b5bb05847", "text": "Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results.", "title": "" }, { "docid": "363c1ecd086043311f16b53b20778d51", "text": "One recent development of cultural globalization emerges in the convergence of taste in media consumption within geo-cultural regions, such as Latin American telenovelas, South Asian Bollywood films and East Asian trendy dramas. Originating in Japan, the so-called trendy dramas (or idol dramas) have created a craze for Japanese commodities in its neighboring countries (Ko, 2004). Following this Japanese model, Korea has also developed as a stronghold of regional exports, ranging from TV programs, movies and pop music to food, fashion and tourism. The fondness for all things Japanese and Korean in East Asia has been vividly captured by such buzz phrases as Japan-mania (hari in Chinese) and the Korean wave (hallyu in Korean and hanliu in Chinese). These two phenomena underscore how popular culture helps polish the image of a nation and thus strengthens its economic competitiveness in the global market. Consequently, nationbranding has become incorporated into the project of nation-building in light of globalization. However, Japan’s cultural spread and Korea’s cultural expansion in East Asia are often analysed from angles that are polar opposites. Scholars suggest that Japan-mania is initiated by the ardent consumers of receiving countries (Nakano, 2002), while the Korea wave is facilitated by the Korean state in order to boost its culture industry (Ryoo, 2008). Such claims are legitimate but neglect the analogues of these two phenomena. 
This article examines the parallel paths through which Japan-mania and the Korean wave penetrate into people’s everyday practices in Taiwan – arguably one of the first countries to be swept by these two trends. My aim is to illuminate the processes in which nation-branding is not only promoted by a nation as an international marketing strategy, but also appropriated by a receiving country as a pattern of consumption. Three seemingly contradictory arguments explain why cultural products ‘sell’ across national borders: cultural transparency, cultural difference and hybridization. First, cultural exports targeting the global market are rarely culturally specific so that they allow worldwide audiences to ‘project [into them] indigenous values, beliefs, rites, and rituals’.", "title": "" }, { "docid": "72a1798a864b4514d954e1e9b6089ad8", "text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.", "title": "" }, { "docid": "01edfc6eb157dc8cf2642f58cf3aba25", "text": "Understanding developmental processes, especially in non-model crop plants, is extremely important in order to unravel unique mechanisms regulating development. Chickpea (C. arietinum L.) seeds are especially valued for their high carbohydrate and protein content. Therefore, in order to elucidate the mechanisms underlying seed development in chickpea, deep sequencing of transcriptomes from four developmental stages was undertaken. In this study, next generation sequencing platform was utilized to sequence the transcriptome of four distinct stages of seed development in chickpea. About 1.3 million reads were generated which were assembled into 51,099 unigenes by merging the de novo and reference assemblies. Functional annotation of the unigenes was carried out using the Uniprot, COG and KEGG databases. RPKM based digital expression analysis revealed specific gene activities at different stages of development which was validated using Real time PCR analysis. More than 90% of the unigenes were found to be expressed in at least one of the four seed tissues. 
DEGseq was used to determine differentially expressing genes which revealed that only 6.75% of the unigenes were differentially expressed at various stages. Homology based comparison revealed 17.5% of the unigenes to be putatively seed specific. Transcription factors were predicted based on HMM profiles built using TF sequences from five legume plants and analyzed for their differential expression during progression of seed development. Expression analysis of genes involved in biosynthesis of important secondary metabolites suggested that chickpea seeds can serve as a good source of antioxidants. Since transcriptomes are a valuable source of molecular markers like simple sequence repeats (SSRs), about 12,000 SSRs were mined in chickpea seed transcriptome and few of them were validated. In conclusion, this study will serve as a valuable resource for improved chickpea breeding.", "title": "" }, { "docid": "b4978b2fbefc79fba6e69ad8fd55ebf9", "text": "This paper proposes an approach based on Least Squares Support Vector Machines (LS-SVMs) for solving second order partial differential equations (PDEs) with variable coefficients. Contrary to most existing techniques, the proposed method provides a closed form approximate solution. The optimal representation of the solution is obtained in the primal-dual setting. The model is built by incorporating the initial/boundary conditions as constraints of an optimization problem. The developed method is well suited for problems involving singular, variable and constant coefficients as well as problems with irregular geometrical domains. Numerical results for linear and nonlinear PDEs demonstrate the efficiency of the proposed method over existing methods.", "title": "" }, { "docid": "9c349ef0f3a48eaeaf678b8730d4b82c", "text": "This paper discusses the effectiveness of the EEG signal for human identification using four or less of channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because signal varies from person to person and impossible to replicate and steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed in 5 separate sessions conducted over a course of two weeks. Features were extracted using the wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, the neural networks algorithm was used to classify the feature vectors. Results show that, whether or not the subjects’ eyes were open are insignificant for a 4– channel biometrics system with a classification rate of 81%. However, for a 2–channel system, the P4 channel should not be included if data is acquired with the subjects’ eyes open. It was observed that for 2– channel system using only the C3 and C4 channels, a classification rate of 71% was achieved. Keywords—Biometric, EEG, Wavelet Packet Decomposition, Neural Networks", "title": "" }, { "docid": "de2ed315762d3f0ac34fe0b77567b3a2", "text": "How much trust a user places in a recommender is crucial to the uptake of the recommendations. Although prior work established various factors that build and sustain user trust, their comparative impact has not been studied in depth. This paper presents the results of a crowdsourced study examining the impact of various recommendation interfaces and content selection strategies on user trust. It evaluates the subjective ranking of nine key factors of trust grouped into three dimensions and examines the differences observed with respect to users’ personality traits.", "title": "" }, { "docid": "67dedca1dbdf5845b32c74e17fc42eb6", "text": "A study in vitro of specimens of human aortic and common carotid arteries was carried out to determine the feasibility of direct measurement (i.e., not from residual lumen) of arterial wall thickness with B mode real-time imaging. Measurements in vivo by the same technique were also obtained from common carotid arteries of 10 young normal male subjects. Aortic samples were classified as class A (relatively normal) or class B (with one or more atherosclerotic plaques). 
In all class A and 85% of class B arterial samples a characteristic B mode image composed of two parallel echogenic lines separated by a hypoechoic space was found. The distance between the two lines (B mode image of intimal + medial thickness) was measured and correlated with the thickness of different combinations of tunicae evaluated by gross and microscopic examination. On the basis of these findings and the results of dissection experiments on the intima and adventitia we concluded that results of B mode imaging of intimal + medial thickness did not differ significantly from the intimal + medial thickness measured on pathologic examination. With respect to the accuracy of measurements obtained by B mode imaging as compared with pathologic findings, we found an error of less than 20% for measurements in 77% of normal and pathologic aortic walls. In addition, no significant difference was found between B mode-determined intimal + medial thickness in the common carotid arteries evaluated in vitro and that determined by this method in vivo in young subjects, indicating that B mode imaging represents a useful approach for the measurement of intimal + medial thickness of human arteries in vivo.", "title": "" }, { "docid": "67dedca1dbdf5845b32c74e17fc42eb6", "text": "How much trust a user places in a recommender is crucial to the uptake of the recommendations. Although prior work established various factors that build and sustain user trust, their comparative impact has not been studied in depth. This paper presents the results of a crowdsourced study examining the impact of various recommendation interfaces and content selection strategies on user trust. It evaluates the subjective ranking of nine key factors of trust grouped into three dimensions and examines the differences observed with respect to users' personality traits.", "title": "" }, { "docid": "fec3feb40d363535955a9ac4234c4126", "text": "This article presents metrics from two Hewlett-Packard (HP) reuse programs that document the improved quality, increased productivity, shortened time-to-market, and enhanced economics resulting from reuse. Work products are the products or by-products of the software-development process: for example, code, design, and test plans. Reuse is the use of these work products without modification in the development of other software. Leveraged reuse is modifying existing work products to meet specific system requirements. A producer is a creator of reusable work products, and the consumer is someone who uses them to create other software. Time-to-market is the time it takes to deliver a product from the time it is conceived. Experience with reuse has been largely positive. Because work products are used multiple times, the accumulated defect fixes result in a higher quality work product. Because the work products have already been created, tested, and documented, productivity increases because consumers of reusable work products need to do less work. However, increased productivity from reuse does not necessarily shorten time-to-market. To reduce time-to-market, reuse must be used effectively on the critical path of a development project. Finally, we have found that reuse allows an organization to use personnel more effectively because it leverages expertise. However, software reuse is not free. It requires resources to create and maintain reusable work products, a reuse library, and reuse tools. 
To help evaluate the costs and benefits of reuse, we have developed an economic analysis method, which we have applied to multiple reuse programs at HP.<<ETX>>", "title": "" }, { "docid": "bb13ad5b41abbf80f7e7c70a9098cd15", "text": "OBJECTIVE\nThis study assessed the psychological distress in Spanish college women and analyzed it in relation to sociodemographic and academic factors.\n\n\nPARTICIPANTS AND METHODS\nThe authors selected a stratified random sampling of 1,043 college women (average age of 22.2 years). Sociodemographic and academic information were collected, and psychological distress was assessed with the Symptom Checklist-90-Revised.\n\n\nRESULTS\nThis sample of college women scored the highest on the depression dimension and the lowest on the phobic anxiety dimension. The sample scored higher than women of the general population on the dimensions of obsessive-compulsive, interpersonal sensitivity, paranoid ideation, psychoticism, and on the Global Severity Index. Scores in the sample significantly differed based on age, relationship status, financial independence, year of study, and area of study.\n\n\nCONCLUSION\nThe results indicated an elevated level of psychological distress among college women, and therefore college health services need to devote more attention to their mental health.", "title": "" }, { "docid": "69d32f5e6a6612770cd50b20e5e7f802", "text": "In this paper we present an approach for efficiently retrieving the most similar image, based on point-to-point correspondences, within a sequence that has been acquired through continuous camera movement. Our approach is entailed to the use of standardized binary feature descriptors and exploits the temporal form of the input data to dynamically adapt the search structure. While being straightforward to implement, our method exhibits very fast response times and its Precision/Recall rates compete with state of the art approaches. Our claims are supported by multiple large scale experiments on publicly available datasets.", "title": "" }, { "docid": "6e00567c5c33d899af9b5a67e37711a3", "text": "The adoption of cloud computing facilities and programming models differs vastly between different application domains. Scalable web applications, low-latency mobile backends and on-demand provisioned databases are typical cases for which cloud services on the platform or infrastructure level exist and are convincing when considering technical and economical arguments. Applications with specific processing demands, including high-performance computing, high-throughput computing and certain flavours of scientific computing, have historically required special configurations such as computeor memory-optimised virtual machine instances. With the rise of function-level compute instances through Function-as-a-Service (FaaS) models, the fitness of generic configurations needs to be re-evaluated for these applications. We analyse several demanding computing tasks with regards to how FaaS models compare against conventional monolithic algorithm execution. Beside the comparison, we contribute a refined FaaSification process for legacy software and provide a roadmap for future work. 1 Research Direction The ability to turn programmed functions or methods into ready-to-use cloud services is leading to a seemingly serverless development and deployment experience for application software engineers [1]. 
Without the necessity to allocate resources beforehand, prototyping new features and workflows becomes faster and more convenient to application service providers. These advantages have given boost to an industry trend consequently called Serverless Computing. The more precise, almost overlapping term in accordance with Everything-asa-Service (XaaS) cloud computing taxonomies is Function-as-a-Service (FaaS) [4]. In the FaaS layer, functions, either on the programming language level or as abstract concept around binary implementations, are executed synchronously or asynchronously through multi-protocol triggers. Function instances are provisioned on demand through coldstart or warmstart of the implementation in conjunction with an associated configuration in few milliseconds, elastically scaled as needed, and charged per invocation and per product of period of time and resource usage, leading to an almost perfect pay-as-you-go utility pricing model [11]. FaaS is gaining traction primarily in three areas. First, in Internet-of-Things applications where connected devices emit data sporadically. Second, for web applications with light-weight backend tasks. Third, as glue code between other cloud computing services. In contrast to the industrial popularity, no work is known to us which explores its potential for scientific and high-performance computing applications with more demanding execution requirements. From a cloud economics and strategy perspective, FaaS is a refinement of the platform layer (PaaS) with particular tools and interfaces. Yet from a software engineering and deployment perspective, functions are complementing other artefact types which are deployed into PaaS or underlying IaaS environments. Fig. 1 explains this positioning within the layered IaaS, PaaS and SaaS service classes, where the FaaS runtime itself is subsumed under runtime stacks. Performing experimental or computational science research with FaaS implies that the two roles shown, end user and application engineer, are adopted by a single researcher or a team of researchers, which is the setting for our research. Fig. 1. Positioning of FaaS in cloud application development The necessity to conduct research on FaaS for further application domains stems from the unique execution characteristics. Service instances are heuristically stateless, ephemeral, and furthermore limited in resource allotment and execution time. They are moreover isolated from each other and from the function management and control plane. In public commercial offerings, they are billed in subsecond intervals and terminated after few minutes, but as with any cloud application, private deployments are also possible. Hence, there is a trade-off between advantages and drawbacks which requires further analysis. For example, existing parallelisation frameworks cannot easily be used at runtime as function instances can only, in limited ways, invoke other functions without the ability to configure their settings. Instead, any such parallelisation needs to be performed before deployment with language-specific tools such as Pydron for Python [10] or Calvert’s compiler for Java [3]. For resourceand time-demanding applications, no special-purpose FaaS instances are offered by commercial cloud providers. This is a surprising observation given the multitude of options in other cloud compute services beyond general-purpose offerings, especially on the infrastructure level (IaaS). 
These include instance types optimised for data processing (with latest-generation processors and programmable GPUs), for memory allocation, and for non-volatile storage (with SSDs). Amazon Web Services (AWS) alone offers 57 different instance types. Our work is therefore concerned with the assessment of how current generic one-size-fits-all FaaS offerings handle scientific computing workloads, whether the proliferation of specialised FaaS instance types can be expected and how they would differ from commonly offered IaaS instance types. In this paper, we contribute specifically (i) a refined view on how software can be made fitting into special-purpose FaaS contexts with a high degree of automation through a process named FaaSification, and (ii) concepts and tools to execute such functions in constrained environments. In the remainder of the paper, we first present background information about FaaS runtimes, including our own prototypes which allow for providerindependent evaluations. Subsequently, we present four domain-specific scientific experiments conducted using FaaS to gain broad knowledge about resource requirements beyond general-purpose instances. We summarise the findings and reason about the implications for future scientific computing infrastructures. 2 Background on Function-as-a-Service 2.1 Programming Models and Runtimes The characteristics of function execution depend primarily on the FaaS runtime in use. There are broadly three categories of runtimes: 1. Proprietary commercial services, such as AWS Lambda, Google Cloud Functions, Azure Functions and Oracle Functions. 2. Open source alternatives with almost matching interfaces and functionality, such as Docker-LambCI, Effe, Google Cloud Functions Emulator and OpenLambda [6], some of which focus on local testing rather than operation. 3. Distinct open source implementations with unique designs, such as Apache OpenWhisk, Kubeless, IronFunctions and Fission, some of which are also available as commercial services, for instance IBM Bluemix OpenWhisk [5]. The uniqueness is a consequence of the integration with other cloud stacks (Kubernetes, OpenStack), the availability of web and command-line interfaces, the set of triggers and the level of isolation in multi-tenant operation scenarios, which is often achieved through containers. In addition, due to the often non-trivial configuration of these services, a number of mostly service-specific abstraction frameworks have become popular among developers, such as PyWren, Chalice, Zappa, Apex and the Serverless Framework [8]. The frameworks and runtimes differ in their support for programming languages, but also in the function signatures, parameters and return values. Hence, a comparison of the entire set of offerings requires a baseline. The research in this paper is congruously conducted with the mentioned commercial FaaS providers as well as with our open-source FaaS tool Snafu which allows for managing, executing and testing functions across provider-specific interfaces [14]. The service ecosystem relationship between Snafu and the commercial FaaS providers is shown in Fig. 2. Snafu is able to import services from three providers (AWS Lambda, IBM Bluemix OpenWhisk, Google Cloud Functions) and furthermore offers a compatible control plane to all three of them in its current implementation version. At its core, it contains a modular runtime environment with prototypical maturity for functions implemented in JavaScript, Java, Python and C. 
Most importantly, it enables repeatable research as it can be deployed as a container, in a virtual machine or on a bare metal workstation. Notably absent from the categories above are FaaS offerings in e-science infrastructures and research clouds, despite the programming model resembling widely used job submission systems. We expect our practical research contributions to overcome this restriction in a vendor-independent manner. Snafu, for instance, is already available as an alpha-version launch profile in the CloudLab testbed federated across several U.S. installations with a total capacity of almost 15000 cores [12], as well as in EGI’s federated cloud across Europe. Fig. 2. Snafu and its ecosystem and tooling Using Snafu, it is possible to adhere to the diverse programming conventions and execution conditions at commercial services while at the same time controlling and lifting the execution restrictions as necessary. In particular, it is possible to define memory-optimised, storage-optimised and compute-optimised execution profiles which serve to conduct the anticipated research on generic (general-purpose) versus specialised (special-purpose) cloud offerings for scientific computing. Snafu can execute in single process mode as well as in a loadbalancing setup where each request is forwarded by the master instance to a slave instance which in turn executes the function natively, through a languagespecific interpreter or through a container. Table 1 summarises the features of selected FaaS runtimes. Table 1. FaaS runtimes and their features Runtime Languages Programming model Import/Export AWS Lambda JavaScript, Python, Java, C# Lambda – Google Cloud Functions JavaScrip", "title": "" }, { "docid": "057621c670a9b7253ba829210c530dca", "text": "Actual challenges in production are individualization and short product lifecycles. To achieve this, the product development and the production planning must be accelerated. In some cases specialized production machines are engineered for automating production processes for a single product. Regarding the engineering of specialized production machines, there is often a sequential process starting with the mechanics, proceeding with the electrics and ending with the automation design. To accelerate this engineering process the different domains have to be parallelized as far as possible (Schlögl, 2008). Thereby the different domains start detailing in parallel after the definition of a common concept. The system integration follows the detailing with the objective to verify the system including the PLC-code. Regarding production machines, the system integration is done either by commissioning of the real machine or by validating the PLCcode against a model of the machine, so called virtual commissioning.", "title": "" }, { "docid": "a6499aad878777373006742778145ddb", "text": "The very term 'Biotechnology' elicits a range of emotions, from wonder and awe to downright fear and hostility. This is especially true among non-scientists, particularly in respect of agricultural and food biotechnology. These emotions indicate just how poorly understood agricultural biotechnology is and the need for accurate, dispassionate information in the public sphere to allow a rational public debate on the actual, as opposed to the perceived, risks and benefits of agricultural biotechnology. 
This review considers first the current state of public knowledge on agricultural biotechnology, and then explores some of the popular misperceptions and logical inconsistencies in both Europe and North America. I then consider the problem of widespread scientific illiteracy, and the role of the popular media in instilling and perpetuating misperceptions. The impact of inappropriate efforts to provide 'balance' in a news story, and of belief systems and faith also impinges on public scientific illiteracy. Getting away from the abstract, we explore a more concrete example of the contrasting approach to agricultural biotechnology adoption between Europe and North America, in considering divergent approaches to enabling coexistence in farming practices. I then question who benefits from agricultural biotechnology. Is it only the big companies, or is it society at large--and the environment--also deriving some benefit? Finally, a crucial aspect in such a technologically complex issue, ordinary and intelligent non-scientifically trained consumers cannot be expected to learn the intricacies of the technology to enable a personal choice to support or reject biotechnology products. The only reasonable and pragmatic alternative is to place trust in someone to provide honest advice. But who, working in the public interest, is best suited to provide informed and accessible, but objective, advice to wary consumers?", "title": "" }, { "docid": "b86ab15486581bbf8056e4f1d30eb4e5", "text": "Existing peer-to-peer publish-subscribe systems rely on structured-overlays and rendezvous nodes to store and relay group membership information. While conceptually simple, this design incurs the significant cost of creating and maintaining rigid-structures and introduces hotspots in the system at nodes that are neither publishers nor subscribers. In this paper, we introduce Quasar, a rendezvous-less probabilistic publish-subscribe system that caters to the specific needs of social networks. It is designed to handle social networks of many groups; on the order of the number of users in the system. It creates a routing infrastructure based on the proactive dissemination of highly aggregated routing vectors to provide anycast-like directed walks in the overlay. This primitive, when coupled with a novel mechanism for dynamically negating routes, enables scalable and efficient group-multicast that obviates the need for structure and rendezvous nodes. We examine the feasibility of this approach and show in a large-scale simulation that the system is scalable and efficient.", "title": "" }, { "docid": "e2f6cd2a6b40c498755e0daf98cead19", "text": "According to an estimate several billion smart devices will be connected to the Internet by year 2020. This exponential increase in devices is a challenge to the current Internet architecture, where connectivity is based on host-to-host communication. Information-Centric Networking is a novel networking paradigm in which data is addressed by its name instead of location. Several ICN architecture proposals have emerged from research communities to address challenges introduced by the current Internet Protocol (IP) regarding e.g. scalability. Content-Centric Networking (CCN) is one of the proposals. In this paper we present a way to use CCN in an Internet of Things (IoT) context. We quantify the benefits from hierarchical content naming, transparent in-network caching and other information-centric networking characteristics in a sensor environment. 
As a proof of concept we implemented a presentation bridge for a home automation system that provides services to the network through CCN.", "title": "" }, { "docid": "3a314a72ea2911844a5a3462d052f4e7", "text": "While increasing income inequality in China has been commented on and studied extensively, relatively little analysis is available on inequality in other dimensions of human development. Using data from different sources, this paper presents some basic facts on the evolution of spatial inequalities in education and healthcare in China over the long run. In the era of economic reforms, as the foundations of education and healthcare provision have changed, so has the distribution of illiteracy and infant mortality. Across provinces and within provinces, between rural and urban areas and within rural and urban areas, social inequalities have increased substantially since the reforms began.", "title": "" }, { "docid": "6d41b17506d0e8964f850c065b9286cb", "text": "Representation learning is a key issue for most Natural Language Processing (NLP) tasks. Most existing representation models either learn little structure information or just rely on pre-defined structures, leading to degradation of performance and generalization capability. This paper focuses on learning both local semantic and global structure representations for text classification. In detail, we propose a novel Sandwich Neural Network (SNN) to learn semantic and structure representations automatically without relying on parsers. More importantly, semantic and structure information contribute unequally to the text representation at corpus and instance level. To solve the fusion problem, we propose two strategies: Adaptive Learning Sandwich Neural Network (AL-SNN) and Self-Attention Sandwich Neural Network (SA-SNN). The former learns the weights at corpus level, and the latter further combines attention mechanism to assign the weights at instance level. Experimental results demonstrate that our approach achieves competitive performance on several text classification tasks, including sentiment analysis, question type classification and subjectivity classification. Specifically, the accuracies are MR (82.1%), SST-5 (50.4%), TREC (96%) and SUBJ (93.9%).", "title": "" }, { "docid": "06f1c7daafcf59a8eb2ddf430d0d7f18", "text": "OBJECTIVES\nWe aimed to evaluate the efficacy of reinforcing short-segment pedicle screw fixation with polymethyl methacrylate (PMMA) vertebroplasty in patients with thoracolumbar burst fractures.\n\n\nMETHODS\nWe enrolled 70 patients with thoracolumbar burst fractures for treatment with short-segment pedicle screw fixation. Fractures in Group A (n = 20) were reinforced with PMMA vertebroplasty during surgery. Group B patients (n = 50) were not treated with PMMA vertebroplasty. Kyphotic deformity, anterior vertebral height, instrument failure rates, and neurological function outcomes were compared between the two groups.\n\n\nRESULTS\nKyphosis correction was achieved in Group A (PMMA vertebroplasty) and Group B (Group A, 6.4 degrees; Group B, 5.4 degrees). At the end of the follow-up period, kyphosis correction was maintained in Group A but lost in Group B (Group A, 0.33-degree loss; Group B, 6.20-degree loss) (P = 0.0001). After surgery, greater anterior vertebral height was achieved in Group A than in Group B (Group A, 12.9%; Group B, 2.3%) (P < 0.001). During follow-up, anterior vertebral height was maintained only in Group A (Group A, 0.13 +/- 4.06%; Group B, -6.17 +/- 1.21%) (P < 0.001). 
Patients in both Groups A and B demonstrated good postoperative Denis Pain Scale grades (P1 and P2), but Group A had better results than Group B in terms of the control of severe and constant pain (P4 and P5) (P < 0.001). The Frankel Performance Scale scores increased by nearly 1 in both Groups A and B. Group B was subdivided into Group B1 and B2. Group B1 consisted of patients who experienced instrument failure, including screw pullout, breakage, disconnection, and dislodgement (n = 11). Group B2 comprised patients from Group B who did not experience instrument failure (n = 39). There were no instrument failures among patients in Group A. Preoperative kyphotic deformity was greater in Group B1 (23.5 +/- 7.9 degrees) than in Group B2 (16.8 +/- 8.40 degrees), P < 0.05. Severe and constant pain (P4 and P5) was noted in 36% of Group B1 patients (P < 0.001), and three of these patients required removal of their implants.\n\n\nCONCLUSION\nReinforcement of short-segment pedicle fixation with PMMA vertebroplasty for the treatment of patients with thoracolumbar burst fracture may achieve and maintain kyphosis correction, and it may also increase and maintain anterior vertebral height. Good Denis Pain Scale grades and improvement in Frankel Performance Scale scores were found in patients without instrument failure (Groups A and B2). Patients with greater preoperative kyphotic deformity had a higher risk of instrument failure if they did not undergo reinforcement with vertebroplasty. PMMA vertebroplasty offers immediate spinal stability in patients with thoracolumbar burst fractures, decreases the instrument failure rate, and provides better postoperative pain control than without vertebroplasty.", "title": "" } ]
scidocsrr
1f643c73d0e44f38714f008d04c8ec66
Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items
[ { "docid": "9f635d570b827d68e057afcaadca791c", "text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.", "title": "" }, { "docid": "b17fdc300edc22ab855d4c29588731b2", "text": "Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.", "title": "" } ]
[ { "docid": "d967d6525cf88d498ecc872a9eef1c7c", "text": "Historical Chinese character recognition has been suffering from the problem of lacking sufficient labeled training samples. A transfer learning method based on Convolutional Neural Network (CNN) for historical Chinese character recognition is proposed in this paper. A CNN model L is trained by printed Chinese character samples in the source domain. The network structure and weights of model L are used to initialize another CNN model T, which is regarded as the feature extractor and classifier in the target domain. The model T is then fine-tuned by a few labeled historical or handwritten Chinese character samples, and used for final evaluation in the target domain. Several experiments regarding essential factors of the CNNbased transfer learning method are conducted, showing that the proposed method is effective.", "title": "" }, { "docid": "6ba537ef9dd306a3caaba63c2b48c222", "text": "A lumped-element circuit is proposed to model a coplanar waveguide (CPW) interdigital capacitor (IDC). Closed-form expressions suitable for CAD purposes are given for each element in the circuit. The obtained results for the series capacitance are in good agreement with those available in the literature. In addition, the scattering parameters obtained from the circuit model are compared with those obtained using the full-wave method of moments (MoM) and good agreement is obtained. Moreover, a multilayer feed-forward artificial neural network (ANN) is developed to model the capacitance of the CPW IDC. It is shown that the developed ANN has successfully learned the required task of evaluating the capacitance of the IDC. © 2005 Wiley Periodicals, Inc. Int J RF and Microwave CAE 15: 551–559, 2005.", "title": "" }, { "docid": "fa7f85468843b519a0476dd969e72512", "text": "We trained a Multimodal Recurrent Neural Network on Flickr30K dataset with Chinese sentences. The RNN model is from Karpathy and Fei-Fei, 2015 [6]. As Chinese sentence has no space between words, we implemented the model on Flickr30 dataset in two methods. In the first setting, we tokenized each Chinese sentence into a list of words and feed them to the RNN. While in the second one, we split each Chinese sentence into a list of characters and feed them into the same model. We compared the BLEU score achieved by our two methods to that achieved by [6]. We found that the RNN model trained with char-level method for Chinese captions outperforms the word-level one. The former method performs very close to that trained on English captions by [6]. This came to a conclusion that the RNN model works universally well, or at least the same, for image caption system on different languages.", "title": "" }, { "docid": "af0097bec55577049b08f2bc9e65dd4d", "text": "The recent surge in using social media has created a massive amount of unstructured textual complaints about products and services. However, discovering and quantifying potential product defects from large amounts of unstructured text is a nontrivial task. In this paper, we develop a probabilistic defect model (PDM) that identifies the most critical product issues and corresponding product attributes, simultaneously. We facilitate domain-oriented key attributes (e.g., product model, year of production, defective components, symptoms, etc.) of a product to identify and acquire integral information of defect. 
We conduct comprehensive evaluations including quantitative evaluations and qualitative evaluations to ensure the quality of discovered information. Experimental results demonstrate that our proposed model outperforms existing unsupervised method (K-Means Clustering), and could find more valuable information. Our research has significant managerial implications for mangers, manufacturers, and policy makers. [Category: Data and Text Mining]", "title": "" }, { "docid": "380fdee23bebf16b05ce7caebd6edac4", "text": "Automatic detection of emotions has been evaluated using standard Mel-frequency Cepstral Coefficients, MFCCs, and a variant, MFCC-low, calculated between 20 and 300 Hz, in order to model pitch. Also plain pitch features have been used. These acoustic features have all been modeled by Gaussian mixture models, GMMs, on the frame level. The method has been tested on two different corpora and languages; Swedish voice controlled telephone services and English meetings. The results indicate that using GMMs on the frame level is a feasible technique for emotion classification. The two MFCC methods have similar performance, and MFCC-low outperforms the pitch features. Combining the three classifiers significantly improves performance.", "title": "" }, { "docid": "6d8e6be6a36d30ed2c18e3b80197ea44", "text": "The hash symbol, called a hashtag, is used to mark the keyword or topic in a tweet. It was created organically by users as a way to categorize messages. Hashtags also provide valuable information for many research applications such as sentiment classification and topic analysis. However, only a small number of tweets are manually annotated. Therefore, an automatic hashtag recommendation method is needed to help users tag their new tweets. Previous methods mostly use conventional machine learning classifiers such as SVM or utilize collaborative filtering technique. A bottleneck of these approaches is that they all use the TF-IDF scheme to represent tweets and ignore the semantic information in tweets. In this paper, we also regard hashtag recommendation as a classification task but propose a novel recurrent neural network model to learn vector-based tweet representations to recommend hashtags. More precisely, we use a skip-gram model to generate distributed word representations and then apply a convolutional neural network to learn semantic sentence vectors. Afterwards, we make use of the sentence vectors to train a long short-term memory recurrent neural network (LSTM-RNN). We directly use the produced tweet vectors as features to classify hashtags without any feature engineering. Experiments on real world data from Twitter to recommend hashtags show that our proposed LSTM-RNN model outperforms state-of-the-art methods and LSTM unit also obtains the best performance compared to standard RNN and gated recurrent unit (GRU).", "title": "" }, { "docid": "ba2632b7a323e785b57328d32a26bc99", "text": "Modern malware is designed with mutation characteristics, namely polymorphism and metamorphism, which causes an enormous growth in the number of variants of malware samples. Categorization of malware samples on the basis of their behaviors is essential for the computer security community, because they receive huge number of malware everyday, and the signature extraction process is usually based on malicious parts characterizing malware families. Microsoft released a malware classification challenge in 2015 with a huge dataset of near 0.5 terabytes of data, containing more than 20K malware samples. 
The analysis of this dataset inspired the development of a novel paradigm that is effective in categorizing malware variants into their actual family groups. This paradigm is presented and discussed in the present paper, where emphasis has been given to the phases related to the extraction, and selection of a set of novel features for the effective representation of malware samples. Features can be grouped according to different characteristics of malware behavior, and their fusion is performed according to a per-class weighting paradigm. The proposed method achieved a very high accuracy ($\\approx$ 0.998) on the Microsoft Malware Challenge dataset.", "title": "" }, { "docid": "4284e9bbe3bf4c50f9e37455f1118e6b", "text": "A longevity revolution (Butler, 2008) is occurring across the globe. Because of factors ranging from the reduction of early-age mortality to an increase in life expectancy at later ages, most of the world’s population is now living longer than preceding generations (Bengtson, 2014). There are currently more than 44 million older adults—typically defined as persons 65 years and older—living in the United States, and this number is expected to increase to 98 million by 2060 (Administration on Aging, 2016). Although most older adults report higher levels of life satisfaction than do younger or middle-aged adults (George, 2010), between 5.6 and 8 million older Americans have a diagnosable mental health or substance use disorder (Bartels & Naslund, 2013). Furthermore, because of the rapid growth of the older adult population, this figure is expected to nearly double by 2030 (Bartels & Naslund, 2013). Mental health care is effective for older adults, and evidence-based treatments exist to address a broad range of issues, including anxiety disorders, depression, sleep disturbances, substance abuse, and some symptoms of dementia (Myers & Harper, 2004). Counseling interventions may also be beneficial for nonclinical life transitions, such as coping with loss, adjusting to retirement and a reduced income, and becoming a grandparent (Myers & Harper, 2004). Yet, older adults are underserved when it comes to mental", "title": "" }, { "docid": "619b39299531f126769aa96b3e0e84e1", "text": "In this paper, we focus on the opinion target extraction as part of the opinion mining task. We model the problem as an information extraction task, which we address based on Conditional Random Fields (CRF). As a baseline we employ the supervised algorithm by Zhuang et al. (2006), which represents the state-of-the-art on the employed data. We evaluate the algorithms comprehensively on datasets from four different domains annotated with individual opinion target instances on a sentence level. Furthermore, we investigate the performance of our CRF-based approach and the baseline in a singleand cross-domain opinion target extraction setting. Our CRF-based approach improves the performance by 0.077, 0.126, 0.071 and 0.178 regarding F-Measure in the single-domain extraction in the four domains. In the crossdomain setting our approach improves the performance by 0.409, 0.242, 0.294 and 0.343 regarding F-Measure over the baseline.", "title": "" }, { "docid": "5dd91b5a3a09075fe1852e5fecd277b0", "text": "Efficient blood flow depends on two developmental processes that occur within the atrioventricular junction (AVJ) of the heart: conduction delay, which entrains sequential chamber contraction; and valve formation, which prevents retrograde fluid movement. 
Defects in either result in severe congenital heart disease; however, little is known about the interplay between these two crucial developmental processes. Here, we show that AVJ conduction delay is locally assigned by the morphogenetic events that initiate valve formation. Our data demonstrate that physical separation from endocardial-derived factors prevents AVJ myocardium from becoming fast conducting. Mechanistically, this physical separation is induced by myocardial-derived factors that support cardiac jelly deposition at the onset of valve formation. These data offer a novel paradigm for conduction patterning, whereby reciprocal myocardial-endocardial interactions coordinate the processes of valve formation with establishment of conduction delay. This, in turn, synchronizes the electrophysiological and structural events necessary for the optimization of blood flow through the developing heart.", "title": "" }, { "docid": "4002b79aac3ab479451006b66723b766", "text": "Wearable devices have recently received considerable interest due to their great promise for a plethora of applications. Increased research efforts are oriented towards a non-invasive monitoring of human health as well as activity parameters. A wide range of wearable sensors are being developed for real-time non-invasive monitoring. This paper provides a comprehensive review of sensors used in wrist-wearable devices, methods used for the visualization of parameters measured as well as methods used for intelligent analysis of data obtained from wrist-wearable devices. In line with this, the main features of commercial wrist-wearable devices are presented. As a result of this review, a taxonomy of sensors, functionalities, and methods used in non-invasive wrist-wearable devices was assembled.", "title": "" }, { "docid": "71c31f41d116a51786a4e8ded2c5fb87", "text": "Targeting CTLA-4 represents a new type of immunotherapeutic approach, namely immune checkpoint inhibition. Blockade of CTLA-4 by ipilimumab was the first strategy to achieve a significant clinical benefit for late-stage melanoma patients in two phase 3 trials. These results fueled the notion of immunotherapy being the breakthrough strategy for oncology in 2013. Subsequently, many trials have been set up to test various immune checkpoint modulators in malignancies, not only in melanoma. In this review, recent new ideas about the mechanism of action of CTLA-4 blockade, its current and future therapeutic use, and the intensive search for biomarkers for response will be discussed. Immune checkpoint blockade, targeting CTLA-4 and/or PD-1/PD-L1, is currently the most promising systemic therapeutic approach to achieve long-lasting responses or even cure in many types of cancer, not just in patients with melanoma.", "title": "" }, { "docid": "e244cbd076ea62b4d720378c2adf4438", "text": "This paper introduces flash organizations: crowds structured like organizations to achieve complex and open-ended goals. Microtask workflows, the dominant crowdsourcing structures today, only enable goals that are so simple and modular that their path can be entirely pre-defined. We present a system that organizes crowd workers into computationally-represented structures inspired by those used in organizations - roles, teams, and hierarchies - which support emergent and adaptive coordination toward open-ended goals. 
Our system introduces two technical contributions: 1) encoding the crowd's division of labor into de-individualized roles, much as movie crews or disaster response teams use roles to support coordination between on-demand workers who have not worked together before; and 2) reconfiguring these structures through a model inspired by version control, enabling continuous adaptation of the work and the division of labor. We report a deployment in which flash organizations successfully carried out open-ended and complex goals previously out of reach for crowdsourcing, including product design, software development, and game production. This research demonstrates digitally networked organizations that flexibly assemble and reassemble themselves from a globally distributed online workforce to accomplish complex work.", "title": "" }, { "docid": "ca23813c7caf031c97ae5c0db447d39d", "text": "Sequence-to-sequence models, such as attention-based models in automatic speech recognition (ASR), are typically trained to optimize the cross-entropy criterion which corresponds to improving the log-likelihood of the data. However, system performance is usually measured in terms of word error rate (WER), not log-likelihood. Traditional ASR systems benefit from discriminative sequence training which optimizes criteria such as the state-level minimum Bayes risk (sMBR) which are more closely related to WER. In the present work, we explore techniques to train attention-based models to directly minimize expected word error rate. We consider two loss functions which approximate the expected number of word errors: either by sampling from the model, or by using N-best lists of decoded hypotheses, which we find to be more effective than the sampling-based method. In experimental evaluations, we find that the proposed training procedure improves performance by up to 8.2% relative to the baseline system. This allows us to train grapheme-based, uni-directional attention-based models which match the performance of a traditional, state-of-the-art, discriminative sequence-trained system on a mobile voice-search task.", "title": "" }, { "docid": "83452d8424d97b1c1f5826d32b8ccbaa", "text": "Creating meaning from a wide variety of available information and being able to choose what to learn are highly relevant skills for learning in a connectivist setting. In this work, various approaches have been utilized to gain insights into learning processes occurring within a network of learners and understand the factors that shape learners' interests and the topics to which learners devote a significant attention. This study combines different methods to develop a scalable analytic approach for a comprehensive analysis of learners' discourse in a connectivist massive open online course (cMOOC). By linking techniques for semantic annotation and graph analysis with a qualitative analysis of learner-generated discourse, we examined how social media platforms (blogs, Twitter, and Facebook) and course recommendations influence content creation and topics discussed within a cMOOC. Our findings indicate that learners tend to focus on several prominent topics that emerge very quickly in the course. They maintain that focus, with some exceptions, throughout the course, regardless of readings suggested by the instructor. Moreover, the topics discussed across different social media differ, which can likely be attributed to the affordances of different media. 
Finally, our results indicate a relatively low level of cohesion in the topics discussed which might be an indicator of a diversity of the conceptual coverage discussed by the course participants.", "title": "" }, { "docid": "f4bf4be69ea3f3afceca056e2b5b8102", "text": "In this paper we present a conversational dialogue system, Ch2R (Chinese Chatter Robot) for online shopping guide, which allows users to inquire about information of mobile phone in Chinese. The purpose of this paper is to describe our development effort in terms of the underlying human language technologies (HLTs) as well as other system issues. We focus on a mixed-initiative conversation mechanism for interactive shopping guide combining initiative guiding and question understanding. We also present some evaluation on the system in mobile phone shopping guide domain. Evaluation results demonstrate the efficiency of our approach.", "title": "" }, { "docid": "74ca6ae081391e5b56c0d3019214ed99", "text": "We present Adaptive Memory Networks (AMN) that processes input-question pairs to dynamically construct a network architecture optimized for lower inference times for Question Answering (QA) tasks. AMN processes the input story to extract entities and stores them in memory banks. Starting from a single bank, as the number of input entities increases, AMN learns to create new banks as the entropy in a single bank becomes too high. Hence, after processing an input-question(s) pair, the resulting network represents a hierarchical structure where entities are stored in different banks, distanced by question relevance. At inference, one or few banks are used, creating a tradeoff between accuracy and performance. AMN is enabled by dynamic networks that allow input dependent network creation and efficiency in dynamic mini-batching as well as our novel bank controller that allows learning discrete decision making with high accuracy. In our results, we demonstrate that AMN learns to create variable depth networks depending on task complexity and reduces inference times for QA tasks.", "title": "" }, { "docid": "0297af005c837e410272ab3152942f90", "text": "Iris authentication is a popular method where persons are accurately authenticated. During authentication phase the features are extracted which are unique. Iris authentication uses IR images for authentication. This proposed work uses color iris images for authentication. Experiments are performed using ten different color models. This paper is focused on performance evaluation of color models used for color iris authentication. This proposed method is more reliable which cope up with different noises of color iris images. The experiments reveals the best selection of color model used for iris authentication. The proposed method is validated on UBIRIS noisy iris database. The results demonstrate that the accuracy is 92.1%, equal error rate of 0.072 and computational time is 0.039 seconds.", "title": "" }, { "docid": "53d07bc7229500295741491aea15f63a", "text": "Unhealthy lifestyle behaviour is driving an increase in the burden of chronic non-communicable diseases worldwide. Recent evidence suggests that poor diet and a lack of exercise contribute to the genesis and course of depression. While studies examining dietary improvement as a treatment strategy in depression are lacking, epidemiological evidence clearly points to diet quality being of importance to the risk of depression. 
Exercise has been shown to be an effective treatment strategy for depression, but this is not reflected in treatment guidelines, and increased physical activity is not routinely encouraged when managing depression in clinical practice. Recommendations regarding dietary improvement, increases in physical activity and smoking cessation should be routinely given to patients with depression. Specialised and detailed advice may not be necessary. Recommendations should focus on following national guidelines for healthy eating and physical activity.", "title": "" }, { "docid": "2fe0e5b0b49e886c9f99132f50beeea6", "text": "Practical wearable gesture tracking requires that sensors align with existing ergonomic device forms. We show that combining EMG and pressure data sensed only at the wrist can support accurate classification of hand gestures. A pilot study with unintended EMG electrode pressure variability led to exploration of the approach in greater depth. The EMPress technique senses both finger movements and rotations around the wrist and forearm, covering a wide range of gestures, with an overall 10-fold cross validation classification accuracy of 96%. We show that EMG is especially suited to sensing finger movements, that pressure is suited to sensing wrist and forearm rotations, and their combination is significantly more accurate for a range of gestures than either technique alone. The technique is well suited to existing wearable device forms such as smart watches that are already mounted on the wrist.", "title": "" } ]
scidocsrr
7441e5c76b17cf1f246c3efebf0dd644
PROBLEMS OF EMPLOYABILITY-A STUDY OF JOB – SKILL AND QUALIFICATION MISMATCH
[ { "docid": "8e74a27a3edea7cf0e88317851bc15eb", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "c08e9731b9a1135b7fb52548c5c6f77e", "text": "Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.", "title": "" }, { "docid": "6b1e67c1768f9ec7a6ab95a9369b92d1", "text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.", "title": "" }, { "docid": "9c97a3ea2acfe09e3c60cbcfa35bab7d", "text": "In comparison with document summarization on the articles from social media and newswire, argumentative zoning (AZ) is an important task in scientific paper analysis. Traditional methodology to carry on this task relies on feature engineering from different levels. In this paper, three models of generating sentence vectors for the task of sentence classification were explored and compared. The proposed approach builds sentence representations using learned embeddings based on neural network. The learned word embeddings formed a feature space, to which the examined sentence is mapped to. Those features are input into the classifiers for supervised classification. Using 10-cross-validation scheme, evaluation was conducted on the Argumentative-Zoning (AZ) annotated articles. 
The results showed that simply averaging the word vectors in a sentence works better than the paragraph-to-vector algorithm, and that integrating specific cuewords into the loss function of the neural network can improve the classification performance. In comparison with the hand-crafted features, the word2vec method won for most of the categories. However, the hand-crafted features showed their strength on classifying some of the categories.", "title": "" }, { "docid": "11e2ec2aab62ba8380e82a18d3fcb3d8", "text": "In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits and we explain the various gathered resources to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, dataset and scripts are made publicly available on GitHub: http://github.com/FerreroJeremy/Cross-Language-Dataset.", "title": "" }, { "docid": "c38c2d8f7c21acc3fcb9b7d9ecc6d2d1", "text": "In this paper we proposed a new technique for human identification using fusion of both face and speech which can substantially improve the rate of recognition as compared to single biometric identification for security system development. The proposed system uses principal component analysis (PCA) as the feature extraction technique, which calculates the eigenvectors and eigenvalues. These feature vectors are compared using a similarity measure such as the Mahalanobis distance for the decision making. Mel-frequency cepstral coefficient (MFCC) feature extraction techniques are used for speech recognition in our project. Cross correlation coefficients are considered as primary features. The Hidden Markov Model (HMM) is used to calculate the likelihoods of the MFCC extracted features to make the decision about the spoken words.", "title": "" }, { "docid": "c8984cf950244f0d300c6446bcb07826", "text": "The grounded theory approach to doing qualitative research in nursing has become very popular in recent years. I confess to never really having understood Glaser and Strauss' original book: The Discovery of Grounded Theory. Since they wrote it, they have fallen out over what grounded theory might be and both produced their own versions of it. I welcomed, then, Kathy Charmaz's excellent and practical guide.", "title": "" }, { "docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd", "text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image.
We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "title": "" }, { "docid": "abec336a59db9dd1fdea447c3c0ff3d3", "text": "Neural network training relies on our ability to find “good” minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and wellchosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, is not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple “filter normalization” method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.", "title": "" }, { "docid": "8c95392ab3cc23a7aa4f621f474d27ba", "text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.", "title": "" }, { "docid": "2062b94ee661e5e50cbaa1c952043114", "text": "The harsh operating environment of the automotive application makes the semi-permanent connector susceptible to intermittent high contact resistance which eventually leads to failure. Fretting corrosion is often the cause of these failures. However, laboratory testing of sample contact materials produce results that do not correlate with commercially tested connectors. A multicontact (M-C) reliability model is developed to bring together the fundamental studies and studies conducted on commercially available connector terminals. It is based on fundamental studies of the single contact interfaces and applied to commercial multicontact terminals. 
The model takes into consideration firstly, that a single contact interface may recover to low contact resistance after attaining a high value and secondly, that a terminal consists of more than one contact interface. For the connector to fail, all contact interfaces have to be in the failed state at the same time.", "title": "" }, { "docid": "d8a7ab2abff4c2e5bad845a334420fe6", "text": "Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of 15 TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment, and the setups were designed to emphasize different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance among physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz [\"Consistent tone reproduction,\" in Proceedings of Computer Graphics and Imaging (2008)] and Krawczyk [\"Lightness perception in tone reproduction for high dynamic range images,\" in Proceedings of Eurographics (2005), p. 3] obtained the better results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments.", "title": "" }, { "docid": "d0cdbd1137e9dca85d61b3d90789d030", "text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).", "title": "" }, { "docid": "79425b2b27a8f80d2c4012c76e6eb8f6", "text": "This paper examines previous Technology Acceptance Model (TAM)-related studies in order to provide an expanded model that explains consumers’ acceptance of online purchasing. Our model provides extensions to the original TAM by including constructs such as social influence and voluntariness; it also examines the impact of external variables including trust, privacy, risk, and e-loyalty. We surveyed consumers in the United States and Australia. 
Our findings suggest that our expanded model serves as a very good predictor of consumers’ online purchasing behaviors. The linear regression model shows a respectable amount of variance explained for Behavioral Intention (R 2 = .627). Suggestions are provided for the practitioner and ideas are presented for future research.", "title": "" }, { "docid": "b591b75b4653c01e3525a0889e7d9b90", "text": "The concept of isogeometric analysis is proposed. Basis functions generated from NURBS (Non-Uniform Rational B-Splines) are employed to construct an exact geometric model. For purposes of analysis, the basis is refined and/or its order elevated without changing the geometry or its parameterization. Analogues of finite element hand p-refinement schemes are presented and a new, more efficient, higher-order concept, k-refinement, is introduced. Refinements are easily implemented and exact geometry is maintained at all levels without the necessity of subsequent communication with a CAD (Computer Aided Design) description. In the context of structural mechanics, it is established that the basis functions are complete with respect to affine transformations, meaning that all rigid body motions and constant strain states are exactly represented. Standard patch tests are likewise satisfied. Numerical examples exhibit optimal rates of convergence for linear elasticity problems and convergence to thin elastic shell solutions. A k-refinement strategy is shown to converge toward monotone solutions for advection–diffusion processes with sharp internal and boundary layers, a very surprising result. It is argued that isogeometric analysis is a viable alternative to standard, polynomial-based, finite element analysis and possesses several advantages. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b7c0864be28d70d49ae4a28fb7d78f04", "text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.", "title": "" }, { "docid": "dc883936f3cc19008983c9a5bb2883f3", "text": "Laparoscopic surgery provides patients with less painful surgery but is more demanding for the surgeon. 
The increased technological complexity and sometimes poorly adapted equipment have led to increased complaints of surgeon fatigue and discomfort during laparoscopic surgery. Ergonomic integration and suitable laparoscopic operating room environment are essential to improve efficiency, safety, and comfort for the operating team. Understanding ergonomics can not only make life of surgeon comfortable in the operating room but also reduce physical strains on surgeon.", "title": "" }, { "docid": "e9b438cfe853e98f05b661f9149c0408", "text": "Misinformation and fact-checking are opposite forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. These news articles are often posted on social media and attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts to fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts, e.g., we observe more misinformation-awareness signals and extensive emoji and swear word usage with falser posts. We further show that these signals can help to detect misinformation. In addition, we find that while there are signals indicating positive effects after fact-checking, there are also signals indicating potential \"backfire\" effects.", "title": "" }, { "docid": "cf5829d1bfa1ae243bbf67776b53522d", "text": "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.", "title": "" }, { "docid": "018b25742275dd628c58208e5bd5a532", "text": "Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. 
Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.", "title": "" }, { "docid": "6ef04225b5f505a48127594a12fef112", "text": "For differential operators of order 2, this paper presents a new method that combines generalized exponents to find those solutions that can be represented in terms of Bessel functions.", "title": "" } ]
scidocsrr
4c5a69311406a5ffdc76b1ed84aa831b
Basic survey on Malware Analysis, Tools and Techniques
[ { "docid": "453af7094a854afd1dfb2e7dc36a7cca", "text": "In this paper, we propose a new approach for the static detection of malicious code in executable programs. Our approach rests on a semantic analysis based on behaviour that even makes possible the detection of unknown malicious code. This analysis is carried out directly on binary code. Static analysis offers techniques for predicting properties of the behaviour of programs without running them. The static analysis of a given binary executable is achieved in three major steps: construction of an intermediate representation, flow-based analysis that catches securityoriented program behaviour, and static verification of critical behaviours against security policies (model checking). 1. Motivation and Background With the advent and the rising popularity of networks, Internet, intranets and distributed systems, security is becoming one of the focal points of research. As a matter of fact, more and more people are concerned with malicious code that could exist in software products. A malicious code is a piece of code that can affect the secrecy, the integrity, the data and control flow, and the functionality of a system. Therefore, ∗This research is jointly funded by a research grant from the Natural Sciences and Engineering Research Council, NSERC, Canada and also by a research contract from the Defence Research Establishment, Valcartier (DREV), 2459, Pie XI Nord, Val-Bélair, QC, Canada, G3J 1X5 their detection is a major concern within the computer science community as well as within the user community. As malicious code can affect the data and control flow of a program, static flow analysis may naturally be helpful as part of the detection process. In this paper, we address the problem of static detection of malicious code in binary executables. The primary objective of this research initiative is to elaborate practical methods and tools with robust theoretical foundations for the static detection of malicious code. The rest of the paper is organized in the following way. Section 2 is devoted to a comparison of static and dynamic approaches. Section 3 presents our approach to the detection of malices in binary executable code. Section 4 discusses the implementation of our approach. Finally, a few remarks and a discussion of future research are ultimately sketched as a conclusion in Section 5. 2. Static vs dynamic analysis There are two main approaches for the detection of malices : static analysis and dynamic analysis. Static analysis consists in examining the code of programs to determine properties of the dynamic execution of these programs without running them. This technique has been used extensively in the past by compiler developers to carry out various analyses and transformations aiming at optimizing the code [10]. Static analysis is also used in reverse engineering of software systems and for program understanding [3, 4]. Its use for the detection of malicious code is fairly recent. Dynamic analysis mainly consists in monitoring the execution of a program to detect malicious behaviour. Static analysis has the following advantages over dynamic analysis: • Static analysis techniques permit to make exhaustive analysis. They are not bound to a specific execution of a program and can give guarantees that apply to all executions of the program. In contrast, dynamic analysis techniques only allow examination of behaviours that correspond to selected test cases. 
• A verdict can be given before execution, where it may be difficult to determine the proper action to take in the presence of malices. • There is no run-time overhead. However, it may be impossible to certify statically that certain properties hold (e.g., due to undecidability). In this case, dynamic monitoring may be the only solution. Thus, static analysis and dynamic analysis are complementary. Static analysis can be used first, and properties that cannot be asserted statically can be monitored dynamically. As mentioned in the introduction, in this paper, we are concerned with static analysis techniques. Not much has been published about their use for the detection of malicious code. In [8], the authors propose a method for statically detecting malicious code in C programs. Their method is based on so-called tell-tale signs, which are program properties that allow one to distinguish between malicious and benign programs. The authors combine the tell-tale sign approach with program slicing in order to produce small fragments of large programs that can be easily analyzed. 3. Description of the Approach Static analysis techniques are generally used to operate on source code. However, as we explained in the introduction, we need to apply them to binary code, and thus, we had to adapt and evolve these techniques. Our approach is structured in three major steps: Firstly, the binary code is translated into an internal intermediate form (see Section 3.1) ; secondly, this intermediate form is abstracted through flowbased analysis as various relevant graphs (controlflow graph, data-flow graph, call graph, critical-API 1 graph, etc.) (Section 3.2); the third step is the static verification and consists in checking these graphs against security policies (Section 3.3). 3.1 Intermediate Representation A binary executable is the machine code version of a high-level or assembly program that has been compiled (or assembled) and linked for a particular platform and operating system. The general format of binary executables varies widely among operating systems. For example, the Portable Executable format (PE) is used by the Windows NT/98/95 operating system. The PE format includes comprehensive information about the different sections of the program that form the main part of the file, including the following segments: • .text, which contains the code and the entry point of the application, • .data, which contains various type of data, • .idata and .edata, which contain respectively the list of imported and exported APIs for an application or a Dynamic-Linking Library (DLL). The code segment (.text) constitutes the main part of the file; in fact, this section contains all the code that is to be analyzed. In order to translate an executable program into an equivalent high-level-language program, we use the disassembly tool IDA32 Pro [7], which can disassemble various types of executable files (ELF, EXE, PE, etc.) for several processors and operating systems (Windows 98, Windows NT, etc.). Also, IDA32 automatically recognizes calls to the standard libraries (i.e., API calls) for a long list of compilers. Statically analysing a program requires the construction of the syntax tree of this program, also called intermediate representation. The various techniques of static analysis are based on this abstract representation. The goal of the first step is to disassemble the binary code and then to parse the assembly code thus generated to produce the syntax tree (Figure 1). 
", "title": "" } ]
[ { "docid": "df175c91322be3a87dfba84793e9b942", "text": "Due to an increasing awareness about dental erosion, many clinicians would like to propose treatments even at the initial stages of the disease. However, when the loss of tooth structure is visible only to the professional eye, and it has not affected the esthetics of the smile, affected patients do not usually accept a full-mouth rehabilitation. Reducing the cost of the therapy, simplifying the clinical steps, and proposing noninvasive adhesive techniques may promote patient acceptance. In this article, the treatment of an ex-bulimic patient is illustrated. A modified approach of the three-step technique was followed. The patient completed the therapy in five short visits, including the initial one. No tooth preparation was required, no anesthesia was delivered, and the overall (clinical and laboratory) costs were kept low. At the end of the treatment, the patient was very satisfied from a biologic and functional point of view.", "title": "" }, { "docid": "1e8466199d3ac46c0005551204d017bf", "text": "Learned local descriptors based on Convolutional Neural Networks (CNNs) have achieved significant improvements on patch-based benchmarks, whereas not having demonstrated strong generalization ability on recent benchmarks of image-based 3D reconstruction. In this paper, we mitigate this limitation by proposing a novel local descriptor learning approach that integrates geometry constraints from multi-view reconstructions, which benefits the learning process in terms of data generation, data sampling and loss computation. We refer to the proposed descriptor as GeoDesc, and demonstrate its superior performance on various large-scale benchmarks, and in particular show its great success on challenging reconstruction tasks. Moreover, we provide guidelines towards practical integration of learned descriptors in Structurefrom-Motion (SfM) pipelines, showing the good trade-off that GeoDesc delivers to 3D reconstruction tasks between accuracy and efficiency.", "title": "" }, { "docid": "82b559efc5d3cd552a6322ff63007825", "text": "OBJECTIVE\nThe purpose was to add to the body of knowledge regarding the impact of interruption on acute care nurses' cognitive workload, total task completion times, nurse frustration, and medication administration error while programming a patient-controlled analgesia (PCA) pump.\n\n\nBACKGROUND\nData support that the severity of medication administration error increases with the number of interruptions, which is especially critical during the administration of high-risk medications. Bar code technology, interruption-free zones, and medication safety vests have been shown to decrease administration-related errors. However, there are few published data regarding the impact of number of interruptions on nurses' clinical performance during PCA programming.\n\n\nMETHOD\nNine acute care nurses completed three PCA pump programming tasks in a simulation laboratory. Programming tasks were completed under three conditions where the number of interruptions varied between two, four, and six. Outcome measures included cognitive workload (six NASA Task Load Index [NASA-TLX] subscales), total task completion time (seconds), nurse frustration (NASA-TLX Subscale 6), and PCA medication administration error (incorrect final programming).\n\n\nRESULTS\nIncreases in the number of interruptions were associated with significant increases in total task completion time ( p = .003). 
We also found increases in nurses' cognitive workload, nurse frustration, and PCA pump programming errors, but these increases were not statistically significant.\n\n\nAPPLICATIONS\nComplex technology use permeates the acute care nursing practice environment. These results add new knowledge on nurses' clinical performance during PCA pump programming and high-risk medication administration.", "title": "" }, { "docid": "a1d58b3a9628dc99edf53c1112dc99b8", "text": "Multiple criteria decision-making (MCDM) research has developed rapidly and has become a main area of research for dealing with complex decision problems. The purpose of the paper is to explore the performance evaluation model. This paper develops an evaluation model based on the fuzzy analytic hierarchy process and the technique for order performance by similarity to ideal solution, fuzzy TOPSIS, to help the industrial practitioners for the performance evaluation in a fuzzy environment where the vagueness and subjectivity are handled with linguistic values parameterized by triangular fuzzy numbers. The proposed method enables decision analysts to better understand the complete evaluation process and provide a more accurate, effective, and systematic decision support tool. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "933398ff8f74a99bec6ea6e794910a8e", "text": "Cognitive computing is an interdisciplinary research field that simulates human thought processes in a computerized model. One application for cognitive computing is sentiment analysis on online reviews, which reflects opinions and attitudes toward products and services experienced by consumers. A high level of classification performance facilitates decision making for both consumers and firms. However, while much effort has been made to propose advanced classification algorithms to improve the performance, the importance of the textual quality of the data has been ignored. This research explores the impact of two influential textual features, namely the word count and review readability, on the performance of sentiment classification. We apply three representative deep learning techniques, namely SRN, LSTM, and CNN, to sentiment analysis tasks on a benchmark movie reviews dataset. Multiple regression models are further employed for statistical analysis. Our findings show that the dataset with reviews having a short length and high readability could achieve the best performance compared with any other combinations of the levels of word count and readability and that controlling the review length is more effective for garnering a higher level of accuracy than increasing the readability. Based on these findings, a practical application, i.e., a text evaluator or a website plug-in for text evaluation, can be developed to provide a service of review editorials and quality control for crowd-sourced review websites. These findings greatly contribute to generating more valuable reviews with high textual quality to better serve sentiment analysis and decision making.", "title": "" }, { "docid": "fcf46a98f9e77c83e4946bc75fb97849", "text": "Recent work on sequence to sequence translation using Recurrent Neural Networks (RNNs) based on Long Short Term Memory (LSTM) architectures has shown great potential for learning useful representations of sequential data. A oneto-many encoder-decoder(s) scheme allows for a single encoder to provide representations serving multiple purposes. 
In our case, we present an LSTM encoder network able to produce representations used by two decoders: one that reconstructs, and one that classifies if the training sequence has an associated label. This allows the network to learn representations that are useful for both discriminative and reconstructive tasks at the same time. This paradigm is well suited for semi-supervised learning with sequences and we test our proposed approach on an action recognition task using motion capture (MOCAP) sequences. We find that semi-supervised feature learning can improve state-of-the-art movement classification accuracy on the HDM05 action dataset. Further, we find that even when using only labeled data and a primarily discriminative objective the addition of a reconstructive decoder can serve as a form of regularization that reduces over-fitting and improves test set accuracy.", "title": "" }, { "docid": "9546f8a74577cc1119e48fae0921d3cf", "text": "Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. We propose a sequence-to-sequence, purely convolutional and deconvolutional autoencoding framework that is free of the above issue, while also being computationally efficient. The proposed method is simple, easy to implement and can be leveraged as a building block for many applications. We show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrate the potential for better utilization of long unlabeled text data.", "title": "" }, { "docid": "80cccd3f325c8bd9e91854a82f39bbbe", "text": "In this paper new fast algorithms for erosion, dilation, propagation and skeletonization are presented. The key principle of the algorithms is to process object contours. A queue is implemented to store the contours in each iteration for the next iteration. The contours can be passed from one operation to another as well. Contour filling and object labelling become available by minor modifications of the basic operations. The time complexity of the algorithms is linear with the number of contour elements to be processed. The algorithms prove to be faster than any other known algorithms..", "title": "" }, { "docid": "89d0ffd0b809acafda10a20bd5f35a77", "text": "Microscopic analysis of erythrocytes in urine is a valuable diagnostic tool for identifying glomerular hematuria. Indicative of glomerular hematuria is the presence of erythrocyte casts and polyand dysmorphic erythrocytes. In contrast, in non-glomerular hematuria, urine sediment erythrocytes are monoand isomorphic, and erythrocyte casts are absent (1, 2) . To date, various variant forms of dysmorphic erythrocyte morphology have been defi ned and classifi ed. They are categorized as: D1, D2, and D3 cells (2) . D1 and D2 cells are also referred to as acanthocytes or G1 cells which are mickey mouse-like cells with membrane protrusions and severe (D1) to mild (D2) loss of cytoplasmic color (2) . D3 cells are doughnut-like or other polyand dysmorphic forms that include discocytes, knizocytes, anulocytes, stomatocytes, codocytes, and schizocytes (2, 3) . 
The cellular morphology of these cells is observed to have mild cytoplasmic loss, and symmetrical shaped membranes free of protrusions. Echinocytes and pseudo-acanthocytes (bite-cells) are not considered to be dysmorphic erythrocytes. Glomerular hematuria is likely if more than 40% of erythrocytes are dysmorphic or 5% are D1-D2 cells and nephrologic work-up should be considered (2). For over 20 years, manual microscopy has been the prevailing technique for examining dysmorphic erythrocytes in urine sediments when glomerular pathology is suspected (4, 5). This labor-intensive method requires significant expertise and experience to ensure consistent and accurate analysis. A more immediate and definitive automated technique that classifies dysmorphic erythrocytes at least as good as the manual method would be an invaluable asset in the routine clinical laboratory practice. Therefore, the aim of the study was to investigate the use of the Iris Diagnostics automated iQ200 (Instrumentation Laboratory, Brussels, Belgium) as an automated platform for screening of dysmorphic erythrocytes. The iQ200 has proven to be an efficient and reliable asset for our urinalysis (5), but has not been used for the quantification of dysmorphic erythrocytes. In total, 207 urine specimens of patients with suspected glomerular pathology were initially examined using manual phase contrast microscopy by two independent experienced laboratory technicians at a university medical center. The same specimens were re-evaluated using the Iris iQ200 instrument at our facility, which is a teaching hospital. The accuracy of the iQ200 was compared to the results of manual microscopy for detecting dysmorphic erythrocytes. Urine samples were processed within 2 h of voiding. Upon receipt, uncentrifuged urine samples were used for strip analysis using the AutionMax Urine Analyzer (Menarini, Valkenswaard, The Netherlands). For analysis of dysmorphic erythrocytes 20 mL urine was fixed with CellFIX™ (a formaldehyde containing fixative solution; BD Biosciences, Breda, The Netherlands) at a dilution of 100:1 (6). One half of fixed urine was centrifuged at 500 × g for 10 min and the pellet analyzed by two independent experienced technicians using phase-contrast microscopy. The other half was analyzed by automated urine sediment analyzer using the iQ200. The iQ200 uses a flow cell that hydrodynamically orients the particles within the focal plane of a microscopic lens coupled to a 1.3 megapixel CCD digital camera. Each particle image is digitized and sent to the instrument processor. For our study, the instrument’s cell-recognition function for classifying erythrocytes was used. Although the iQ200 can easily recognize and classify normal erythrocytes it cannot automatically classify dysmorphic erythrocytes. Instead, two independent and experienced technicians review the images in categories ‘normal erythrocytes’ and ‘unclassified’ and reclassify dysmorphic erythrocytes to a separate ‘dysmorphic’ category.
", "title": "" }, { "docid": "c1fa2b5da311edb241dca83edcf327a4", "text": "The growing amount of web-based attacks poses a severe threat to the security of web applications. Signature-based detection techniques increasingly fail to cope with the variety and complexity of novel attack instances. As a remedy, we introduce a protocol-aware reverse HTTP proxy TokDoc (the token doctor), which intercepts requests and decides on a per-token basis whether a token requires automatic \"healing\". In particular, we propose an intelligent mangling technique, which, based on the decision of previously trained anomaly detectors, replaces suspicious parts in requests by benign data the system has seen in the past. Evaluation of our system in terms of accuracy is performed on two real-world data sets and a large variety of recent attacks. In comparison to state-of-the-art anomaly detectors, TokDoc is not only capable of detecting most attacks, but also significantly outperforms the other methods in terms of false positives. Runtime measurements show that our implementation can be deployed as an inline intrusion prevention system.", "title": "" }, { "docid": "75daf6e2e18e9da98e507ac66ff611fd", "text": "OBJECTIVE\nA major source of information available in electronic health record (EHR) systems are the clinical free text notes documenting patient care. Managing this information is time-consuming for clinicians. Automatic text summarisation could assist clinicians in obtaining an overview of the free text information in ongoing care episodes, as well as in writing final discharge summaries. We present a study of automated text summarisation of clinical notes. It looks to identify which methods are best suited for this task and whether it is possible to automatically evaluate the quality differences of summaries produced by different methods in an efficient and reliable way.\n\n\nMETHODS AND MATERIALS\nThe study is based on material consisting of 66,884 care episodes from EHRs of heart patients admitted to a university hospital in Finland between 2005 and 2009. We present novel extractive text summarisation methods for summarising the free text content of care episodes. Most of these methods rely on word space models constructed using distributional semantic modelling. The summarisation effectiveness is evaluated using an experimental automatic evaluation approach incorporating well-known ROUGE measures. We also developed a manual evaluation scheme to perform a meta-evaluation on the ROUGE measures to see if they reflect the opinions of health care professionals.\n\n\nRESULTS\nThe agreement between the human evaluators is good (ICC=0.74, p<0.001), demonstrating the stability of the proposed manual evaluation method. Furthermore, the correlation between the manual and automated evaluations are high (> 0.90 Spearman's rho). Three of the presented summarisation methods ('Composite', 'Case-Based' and 'Translate') significantly outperform the other methods for all ROUGE measures (p<0.05, Wilcoxon signed-rank test and Bonferroni correction).\n\n\nCONCLUSION\nThe results indicate the feasibility of the automated summarisation of care episodes. 
Moreover, the high correlation between manual and automated evaluations suggests that the less labour-intensive automated evaluations can be used as a proxy for human evaluations when developing summarisation methods. This is of significant practical value for summarisation method development, because manual evaluation cannot be afforded for every variation of the summarisation methods. Instead, one can resort to automatic evaluation during the method development process.", "title": "" }, { "docid": "764c38722f53229344184248ac94a096", "text": "Verbal fluency tasks have long been used to assess and estimate group and individual differences in executive functioning in both cognitive and neuropsychological research domains. Despite their ubiquity, however, the specific component processes important for success in these tasks have remained elusive. The current work sought to reveal these various components and their respective roles in determining performance in fluency tasks using latent variable analysis. Two types of verbal fluency (semantic and letter) were compared along with several cognitive constructs of interest (working memory capacity, inhibition, vocabulary size, and processing speed) in order to determine which constructs are necessary for performance in these tasks. The results are discussed within the context of a two-stage cyclical search process in which participants first search for higher order categories and then search for specific items within these categories.", "title": "" }, { "docid": "77ff4bd27b795212d355162822fc0cdc", "text": "We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image. There are several technical challenges to this, such as occlusions, lack of calibration data and the scale ambiguity between object size and distance. These have not been addressed in full generality in previous work. Here we propose to tackle these issues by building upon advances in object recognition and using recently created large-scale datasets. We first introduce the task of amodal bounding box completion, which aims to infer the the full extent of the object instances in the image. We then propose a probabilistic framework for learning category-specific object size distributions from available annotations and leverage these in conjunction with amodal completions to infer veridical sizes of objects in novel images. Finally, we introduce a focal length prediction approach that exploits scene recognition to overcome inherent scale ambiguities and demonstrate qualitative results on challenging real-world scenes.", "title": "" }, { "docid": "c8b434b02fbf622ded305ac7e76bcc64", "text": "Marine invertebrate collections have historically been maintained in ethanol following fixation in formalin. These collections may represent rare or extinct species or populations, provide detailed time-series samples, or come from presently inaccessible or difficult-to-sample localities. We tested the viability of obtaining DNA sequence data from formalin-fixed, ethanol-preserved (FFEP) deep-sea crustaceans, and found that nucleotide sequences for mitochondrial 16S rRNA and COI genes can be recovered from FFEP collections of varying age, and that these sequences are unmodified compared with those derived from frozen specimens. These results were repeatable among multiple specimens and collections for several species. 
Our results indicate that in the absence of fresh or frozen tissues, archived FFEP specimens may prove a useful source of material for analysis of gene sequence data by polymerase chain reaction (PCR) and direct sequencing.", "title": "" }, { "docid": "3faeedfe2473dc837ab0db9eb4aefc4b", "text": "The spacing effect—that is, the benefit of spacing learning events apart rather than massing them together—has been demonstrated in hundreds of experiments, but is not well known to educators or learners. I investigated the spacing effect in the realistic context of flashcard use. Learners often divide flashcards into relatively small stacks, but compared to a large stack, small stacks decrease the spacing between study trials. In three experiments, participants used a web-based study programme to learn GRE-type word pairs. Studying one large stack of flashcards (i.e. spacing) was more effective than studying four smaller stacks of flashcards separately (i.e. massing). Spacing was also more effective than cramming—that is, massing study on the last day before the test. Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing. Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "6e17362c0e6a4d3190b3c8b0a11d6844", "text": "A transimpedance amplifier (TIA) has been designed in a 0.35 μm digital CMOS technology for Gigabit Ethernet. It is based on the structure proposed by Mengxiong Li [1]. This paper presents an amplifier which exploits the regulated cascode (RGC) configuration as the input stage with an integrated optical receiver which consists of an integrated photodetector, thus achieving as large effective input transconductance as that of Si Bipolar or GaAs MESFET. The RGC input configuration isolates the input parasitic capacitance including photodiode capacitance from the bandwidth determination better than common-gate TIA. A series inductive peaking is used for enhancing the bandwidth. The proposed TIA has transimpedance gain of 51.56 dBΩ, and 3-dB bandwidth of 6.57 GHz with two inductor between the RGC and source follower for 0.1 pF photodiode capacitance. The proposed TIA has an input courant noise level of about 21.57 pA/Hz0.5 and it consumes DC power of 16 mW from 3.3 V supply voltage.", "title": "" }, { "docid": "77731bed6cf76970e851f3b2ce467c1b", "text": "We introduce SparkGalaxy, a big data processing toolkit that is able to encode complex data science experiments as a set of high-level workflows. SparkGalaxy combines the Spark big data processing platform and the Galaxy workflow management system to o↵er a set of tools for graph processing and machine learning using a novel interaction model for creating and using complex workflows. SparkGalaxy contributes an easy-to-use interface and scalable algorithms for data science. We demonstrate SparkGalaxy use in large social network analysis and other case stud-", "title": "" }, { "docid": "461f422f7705f0b5ef8e8edde989719e", "text": "In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. 
To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. We demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces.", "title": "" }, { "docid": "32f72bb01626c69aaf7c3464f938c2d4", "text": "The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.", "title": "" }, { "docid": "28a1667c6fe90ad1a1c6838c728a61c8", "text": "When people interpret language, they can reduce the ambiguity of linguistic expressions by using information about perspective: the speaker's, their own, or a shared perspective. In order to investigate the mental processes that underlie such perspective taking, we tracked people's eye movements while they were following instructions to manipulate objects. The eye fixation data in two experiments demonstrate that people do not restrict the search for referents to mutually known objects. Eye movements indicated that addressees considered objects as potential referents even when the speaker could not see those objects, requiring addressees to use mutual knowledge to correct their interpretation. Thus, people occasionally use an egocentric heuristic when they comprehend. We argue that this egocentric heuristic is successful in reducing ambiguity, though it could lead to a systematic error.", "title": "" } ]
scidocsrr
aed133cd60ab3188f2e1eb504208822a
Towards THz Communications - Status in Research, Standardization and Regulation
[ { "docid": "45a098c09a3803271f218fafd4d951cd", "text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.", "title": "" } ]
[ { "docid": "a16e16484e3fca05f97916a48f6a6da5", "text": "A novel integrated magnetic structure suitable for the transformer-linked interleaved boost chopper circuit is proposed in this paper. The coupled inductor is known to be effective for miniaturization in high coupling area because the DC flux in the core can be canceled and the inductor current ripple become to high frequency. However, coupled inductor with E-E core and E-I core are realistically difficult to obtain necessary leakage inductance in high coupling area. The cause is fringing effect and the effects leads to complication of magnetic design. To solve this problem, novel integrated magnetic structure with reduction of fringing flux and high frequency ripple current performance, is proposed. Furthermore, the design method for novel integrated magnetic structure suitable for coupled inductor is proposed from analyzing of the magnetic circuit model. Finally, effectiveness of reduction of fringing flux and design method for novel coupled inductor are discussed from experimental point of view.", "title": "" }, { "docid": "4349d7307567efa5297a4bdd91723336", "text": "Smartphones act as mobile entertainment units where a user can: watch videos, listen to music, update blogs, as well as audio and video blogging. The aim of this study was to review the impact of smartphones on academic performance of students in higher learning institutions. Intensive literature review was done finding out the disadvantages and advantages brought by smartphones in academic arena. In the future, research will be conducted at Ruaha Catholic University to find out whether students are benefiting from using smartphones in their daily studies and whether do they affect their GPA at the end of the year. Keywords— Smartphones, Academic performance, higher learning students, Addictions, GPA, RUCU.", "title": "" }, { "docid": "ab430da4dbaae50c2700f3bb9b1dbde5", "text": "Visual appearance score, appearance mixture type and deformation are three important information sources for human pose estimation. This paper proposes to build a multi-source deep model in order to extract non-linear representation from these different aspects of information sources. With the deep model, the global, high-order human body articulation patterns in these information sources are extracted for pose estimation. The task for estimating body locations and the task for human detection are jointly learned using a unified deep model. The proposed approach can be viewed as a post-processing of pose estimation results and can flexibly integrate with existing methods by taking their information sources as input. By extracting the non-linear representation from multiple information sources, the deep model outperforms state-of-the-art by up to 8.6 percent on three public benchmark datasets.", "title": "" }, { "docid": "fe31348bce3e6e698e26aceb8e99b2d8", "text": "Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. 
Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually.\n Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.", "title": "" }, { "docid": "1e21662f93476663e01f721642c16336", "text": "Inspired by the biological concept of central pattern generators (CPGs), this paper deals with adaptive walking control of biped robots. Using CPGs, a trajectory generator is designed consisting of a center-of-gravity (CoG) trajectory generator and a workspace trajectory modulation process. Entraining with feedback information, the CoG generator can generate adaptive CoG trajectories online and workspace trajectories can be modulated in real time based on the generated adaptive CoG trajectories. A motion engine maps trajectories from workspace to joint space. The proposed control strategy is able to generate adaptive joint control signals online to realize biped adaptive walking. The experimental results using a biped platform NAO confirm the effectiveness of the proposed control strategy.", "title": "" }, { "docid": "bfea738332e9802e255881c5592195f2", "text": "This paper presents a distributed Kalman filter to estimate the state of a sparsely connected, large-scale, n -dimensional, dynamical system monitored by a network of N sensors. Local Kalman filters are implemented on nl-dimensional subsystems, nl Lt n, obtained by spatially decomposing the large-scale system. The distributed Kalman filter is optimal under an Lth order Gauss-Markov approximation to the centralized filter. We quantify the information loss due to this Lth-order approximation by the divergence, which decreases as L increases. The order of the approximation L leads to a bound on the dimension of the subsystems, hence, providing a criterion for subsystem selection. The (approximated) centralized Riccati and Lyapunov equations are computed iteratively with only local communication and low-order computation by a distributed iterate collapse inversion (DICI) algorithm. We fuse the observations that are common among the local Kalman filters using bipartite fusion graphs and consensus averaging algorithms. The proposed algorithm achieves full distribution of the Kalman filter. Nowhere in the network, storage, communication, or computation of n-dimensional vectors and matrices is required; only nl Lt n dimensional vectors and matrices are communicated or used in the local computations at the sensors. In other words, knowledge of the state is itself distributed.", "title": "" }, { "docid": "5bcf027ed1a162c88f9083fa48b17ed7", "text": "Customer Relationship Management possess Business Intelligence by incorporating information acquisition, information storage, and decision support functions to provide customized customer service. 
It enables customer representatives to analyze and classify data to address customer needs in order to promote greater customer satisfaction and retention, but in reality we have learned CRM classification models are outdated, substandard because of noisy and unbalanced data set. In this paper, a new feature selection method is proposed to resolve such CRM data set with relevant features by incorporating an efficient data mining techniques to improve data quality and feature relevancy after preprocessing. Finally it enhances the performance of classification.", "title": "" }, { "docid": "d723903b45554c7a6c2fb4f32aa5dc48", "text": "Harvard architecture CPU design is common in the embedded world. Examples of Harvard-based architecture devices are the Mica family of wireless sensors. Mica motes have limited memory and can process only very small packets. Stack-based buffer overflow techniques that inject code into the stack and then execute it are therefore not applicable. It has been a common belief that code injection is impossible on Harvard architectures. This paper presents a remote code injection attack for Mica sensors. We show how to exploit program vulnerabilities to permanently inject any piece of code into the program memory of an Atmel AVR-based sensor. To our knowledge, this is the first result that presents a code injection technique for such devices. Previous work only succeeded in injecting data or performing transient attacks. Injecting permanent code is more powerful since the attacker can gain full control of the target sensor. We also show that this attack can be used to inject a worm that can propagate through the wireless sensor network and possibly create a sensor botnet. Our attack combines different techniques such as return oriented programming and fake stack injection. We present implementation details and suggest some counter-measures.", "title": "" }, { "docid": "9cc04311cc991af56a69267a5a22aa37", "text": "Adversarial samples are strategically modified samples, which are crafted with the purpose of fooling a classifier at hand. An attacker introduces specially crafted adversarial samples to a deployed classifier, which are being mis-classified by the classifier. However, the samples are perceived to be drawn from entirely different classes and thus it becomes hard to detect the adversarial samples. Most of the prior works have been focused on synthesizing adversarial samples in the image domain. In this paper, we propose a new method of crafting adversarial text samples by modification of the original samples. Modifications of the original text samples are done by deleting or replacing the important or salient words in the text or by introducing new words in the text sample. Our algorithm works best for the datasets which have sub-categories within each of the classes of examples. While crafting adversarial samples, one of the key constraint is to generate meaningful sentences which can at pass off as legitimate from language (English) viewpoint. Experimental results on IMDB movie review dataset for sentiment analysis and Twitter dataset for gender detection show the efficiency of our proposed method.", "title": "" }, { "docid": "0344917c6b44b85946313957a329bc9c", "text": "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. 
Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm.", "title": "" }, { "docid": "f53885bda1368b5d7b9d14848d3002d2", "text": "This paper presents a method for a reconfigurable magnetic resonance-coupled wireless power transfer (R-MRC-WPT) system in order to achieve higher transmission efficiency under various transmission distance and/or misalignment conditions. Higher efficiency, longer transmission distance, and larger misalignment tolerance can be achieved with the presented R-MRC-WPT system when compared to the conventional four-coil MRC-WPT (C-MRC-WPT) system. The reconfigurability in the R-MRC-WPT system is achieved by adaptively switching between different sizes of drive loops and load loops. All drive loops are in the same plane and all load loops are also in the same plane; this method does not require mechanical movements of the drive loop and load loop and does not result in the system volume increase. Theoretical basis of the method for the R-MRC-WPT system is derived based on a circuit model and an analytical model. Results from a proof-of-concept experimental prototype, with transmitter and receiver coil diameter of 60 cm each, show that the transmission efficiency of the R-MRC-WPT system is higher than the transmission efficiency of the C-MRC-WPT system and the capacitor tuning system for all distances up to 200 cm (~3.3 times the coil diameter) and for all lateral misalignment values within 60 cm (one coil diameter).", "title": "" }, { "docid": "597c3e1762b0eb8558b72963f25d4b27", "text": "Animals are widespread in nature and the analysis of their shape and motion is important in many fields and industries. Modeling 3D animal shape, however, is difficult because the 3D scanning methods used to capture human shape are not applicable to wild animals or natural settings. Consequently, we propose a method to capture the detailed 3D shape of animals from images alone. The articulated and deformable nature of animals makes this problem extremely challenging, particularly in unconstrained environments with moving and uncalibrated cameras. To make this possible, we use a strong prior model of articulated animal shape that we fit to the image data. We then deform the animal shape in a canonical reference pose such that it matches image evidence when articulated and projected into multiple images. Our method extracts significantly more 3D shape detail than previous methods and is able to model new species, including the shape of an extinct animal, using only a few video frames. 
Additionally, the projected 3D shapes are accurate enough to facilitate the extraction of a realistic texture map from multiple frames.", "title": "" }, { "docid": "fead6ca9612b29697f73cb5e57c0a1cc", "text": "This research examines the effect of online social capital and Internet use on the normally negative effects of technology addiction, especially for individuals prone to self-concealment. Self-concealment is a personality trait that describes individuals who are more likely to withhold personal and private information, inhibiting catharsis and wellbeing. Addiction, in any context, is also typically associated with negative outcomes. However, we investigate the hypothesis that communication technology addiction may positively affect wellbeing for self-concealing individuals when online interaction is positive, builds relationships, or fosters a sense of community. Within these parameters, increased communication through mediated channels (and even addiction) may reverse the otherwise negative effects of self-concealment on wellbeing. Overall, the proposed model offers qualified support for the continued analysis of mediated communication as a potential source for improving the wellbeing for particular individuals. This study is important because we know that healthy communication in relationships, including disclosure, is important to wellbeing. This study recognizes that not all people are comfortable communicating in face-to-face settings. Our findings offer evidence that the presence of computers in human behaviors (e.g., mediated channels of communication and NCTs) enables some individuals to communicate and fos ter beneficial interpersonal relationships, and improve their wellbeing.", "title": "" }, { "docid": "e0ab4e702030ba68c5cb2a49dda02953", "text": "Automatic forecasts of large numbers of univariate time series are often needed in business and other contexts. We describe two automatic forecasting algorithms that have been implemented in the forecast package for R. The first is based on innovations state space models that underly exponential smoothing methods. The second is a step-wise algorithm for forecasting with ARIMA models. The algorithms are applicable to both seasonal and non-seasonal data, and are compared and illustrated using four real time series. We also briefly describe some of the other functionality available in the forecast package.", "title": "" }, { "docid": "b413cd956623afce3d50780ff90b0efe", "text": "Parkinson's disease (PD) is the second most common neurodegenerative disorder. The majority of cases do not arise from purely genetic factors, implicating an important role of environmental factors in disease pathogenesis. Well-established environmental toxins important in PD include pesticides, herbicides, and heavy metals. However, many toxicants linked to PD and used in animal models are rarely encountered. In this context, other factors such as dietary components may represent daily exposures and have gained attention as disease modifiers. Several in vitro, in vivo, and human epidemiological studies have found a variety of dietary factors that modify PD risk. Here, we critically review findings on association between dietary factors, including vitamins, flavonoids, calorie intake, caffeine, alcohol, and metals consumed via food and fatty acids and PD. We have also discussed key data on heterocyclic amines that are produced in high-temperature cooked meat, which is a new emerging field in the assessment of dietary factors in neurological diseases. 
While more research is clearly needed, significant evidence exists that specific dietary factors can modify PD risk.", "title": "" }, { "docid": "c9d83e00c6ac47f3d3679df3f7338e78", "text": "Due to their widespread popularity, decreasing costs, built-in sensors, computing power and communication capabilities, Android-based personal devices are being seen as an appealing technology for the deployment of wearable fall detection systems. In contrast with previous solutions in the existing literature, which are based on the performance of a single element (a smartphone), this paper proposes and evaluates a fall detection system that benefits from the detection performed by two popular personal devices: a smartphone and a smartwatch (both provided with an embedded accelerometer and a gyroscope). In the proposed architecture, a specific application in each component permanently tracks and analyses the patient's movements. Diverse fall detection algorithms (commonly employed in the literature) were implemented in the developed Android apps to discriminate falls from the conventional activities of daily living of the patient. As a novelty, a fall is only assumed to have occurred if it is simultaneously and independently detected by the two Android devices (which can interact via Bluetooth communication). The system was systematically evaluated in an experimental testbed with actual test subjects simulating a set of falls and conventional movements associated with activities of daily living. The tests were repeated by varying the detection algorithm as well as the pre-defined mobility patterns executed by the subjects (i.e., the typology of the falls and non-fall movements). The proposed system was compared with the cases where only one device (the smartphone or the smartwatch) is considered to recognize and discriminate the falls. The obtained results show that the joint use of the two detection devices clearly increases the system's capability to avoid false alarms or 'false positives' (those conventional movements misidentified as falls) while maintaining the effectiveness of the detection decisions (that is to say, without increasing the ratio of 'false negatives' or actual falls that remain undetected).", "title": "" }, { "docid": "50e96852f585c88f38a6884c43a5a57e", "text": "The IEEE 802.11 Working Group has initiated a new study group known as IEEE 802.11ax which is aiming to devise ways to improve spectrum efficiency, in particular to enhance the system throughput in highly dense scenarios, frequently referred to as Overlapped Basic Service Set (OBSS). In this paper we revisit some of the common problems faced in traditional WiFi networks and show how their effects could be amplified in dense deployments, especially in co-channel scenarios. We then highlight our findings through a simulation based study and draw inferences from these. Some of the key insights from this study are: link suppression and deadlock effects could potentially amplify in co-channel deployments thereby significantly degrading the throughput performance. Also, increasing the concentration of APs in a given area may not always lead to better performance and therefore AP placement needs to be carefully managed in OBSS scenarios. 
Where AP placement cannot be controlled due to unmanaged environments, findings indicate the need for intelligent load balancing and channel selection algorithms to minimize the impact of the aforementioned effects.", "title": "" }, { "docid": "2cd87e37e2cab1b6db72cbc68af7acb6", "text": "Distinguishing and classifying different types of malware is important to better understanding how they can infect computers and devices, the threat level they pose and how to protect against them. In this paper, a system for classifying malware programs is presented. The paper describes the architecture of the system and assesses its performance on a publicly available database (provided by Microsoft for the Microsoft Malware Classification Challenge BIG2015) to serve as a benchmark for future research efforts. First, the malicious programs are preprocessed such that they are visualized as gray scale images. We then make use of an architecture comprised of multiple layers (multiple levels of encoding) to carry out the classification process of those images/programs. We compare the performance of this approach against traditional machine learning and pattern recognition algorithms. Our experimental results show that the deep learning architecture yields a boost in performance over those conventional/standard algorithms. A hold-out validation analysis using the superior architecture shows an accuracy in the order of 99.15%.", "title": "" }, { "docid": "f83d8a69a4078baf4048b207324e505f", "text": "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.", "title": "" } ]
scidocsrr
235c7f8204b6bcf94d528543fcbb9097
Depth Separation for Neural Networks
[ { "docid": "7d33ba30fd30dce2cd4a3f5558a8c0ba", "text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.", "title": "" }, { "docid": "40b78c5378159e9cdf38275a773b8109", "text": "For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by $${\\text{O}}\\left( {\\frac{{C_f^2 }}{n}} \\right) + O(\\frac{{ND}}{N}\\log N)$$ where n is the number of nodes, d is the input dimension of the function, N is the number of training observations, and C f is the first absolute moment of the Fourier magnitude distribution of f. The two contributions to this total risk are the approximation error and the estimation error. Approximation error refers to the distance between the target function and the closest neural network function of a given architecture and estimation error refers to the distance between this ideal network function and an estimated network function. With n ~ C f(N/(dlog N))1/2 nodes, the order of the bound on the mean integrated squared error is optimized to be O(C f((d/N)log N)1/2). The bound demonstrates surprisingly favorable properties of network estimation compared to traditional series and nonparametric curve estimation techniques in the case that d is moderately large. Similar bounds are obtained when the number of nodes n is not preselected as a function of C f (which is generally not known a priori), but rather the number of nodes is optimized from the observed data by the use of a complexity regularization or minimum description length criterion. The analysis involves Fourier techniques for the approximation error, metric entropy considerations for the estimation error, and a calculation of the index of resolvability of minimum complexity estimation of the family of networks.", "title": "" }, { "docid": "6efdf43a454ce7da51927c07f1449695", "text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). 
For instance, there is a family of functions f_n, where n is the number of input variables, such that f_n can be computed with a deep sum-product network of log_2(n) layers and n−1 units, while a shallow sum-product network (two layers) requires 2^(√n−1) units. These mathematical results are in the same spirit as those by Håstad and Goldmann (1991) on the limitations of small-depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.", "title": "" } ]
[ { "docid": "96d123a5c9a01922ebb99623fddd1863", "text": "Previous studies have shown that Wnt signaling is involved in postnatal mammalian myogenesis; however, the downstream mechanism of Wnt signaling is not fully understood. This study reports that the murine four-and-a-half LIM domain 1 (Fhl1) could be stimulated by β-catenin or LiCl treatment to induce myogenesis. In contrast, knockdown of the Fhl1 gene expression in C2C12 cells led to reduced myotube formation. We also adopted reporter assays to demonstrate that either β-catenin or LiCl significantly activated the Fhl1 promoter, which contains four putative consensus TCF/LEF binding sites. Mutations of two of these sites caused a significant decrease in promoter activity by luciferase reporter assay. Thus, we suggest that Wnt signaling induces muscle cell differentiation, at least partly, through Fhl1 activation.", "title": "" }, { "docid": "05092df698f691d35df8d4bc0008ec8f", "text": "BACKGROUND\nPurpura fulminans is a rare and extremely severe infection, mostly due to Neisseria meningitidis frequently causing early orthopedic lesions. Few studies have reported on the initial surgical management of acute purpura fulminans. The aim of this study is to look at the predictive factors in orthopedic outcome in light of the initial surgical management in children surviving initial resuscitation.\n\n\nMETHODS\nNineteen patients referred to our institution between 1987 and 2005 were taken care of at the very beginning of the purpura fulminans. All cases were retrospectively reviewed so as to collect information on the total skin necrosis, vascular insufficiency, gangrene, and total duration of vasopressive treatment.\n\n\nRESULTS\nAll patients had multiorgan failure; only one never developed any skin necrosis or ischemia. Eighteen patients lost tissue, leading to 22 skin grafts, including two total skin grafts. There was only one graft failure. Thirteen patients were concerned by an amputation, representing, in total, 54 fingers, 36 toes, two transmetatarsal, and ten transtibial below-knee amputations, with a mean delay of 4 weeks after onset of the disease. Necrosis seems to affect mainly the lower limbs, but there is no predictive factor that impacted on the orthopedic outcome. We did not perform any fasciotomy or compartment pressure measurement to avoid non-perfusion worsening; nonetheless, our outcome in this series is comparable to existing series in the literature. V.A.C.(®) therapy could be promising regarding the management of skin necrosis in this particular context. While suffering from general multiorgan failure, great care should be observed not to miss any additional osseous or articular infection, as some patients also develop local osteitis and osteomyelitis that are often not diagnosed.\n\n\nCONCLUSIONS\nWe do not advocate very early surgery during the acute phase of purpura fulminans, as it does not change the orthopedic outcome in these children. By performing amputations and skin coverage some time after the acute phase, we obtained similar results to those found in the literature.", "title": "" }, { "docid": "b7ee04e61d8666b6d865e69e24f69a6f", "text": "CONTEXT\nThis article presents the main results from a large-scale analytical systematic review on knowledge exchange interventions at the organizational and policymaking levels. 
The review integrated two broad traditions, one roughly focused on the use of social science research results and the other focused on policymaking and lobbying processes.\n\n\nMETHODS\nData collection was done using systematic snowball sampling. First, we used prospective snowballing to identify all documents citing any of a set of thirty-three seminal papers. This process identified 4,102 documents, 102 of which were retained for in-depth analysis. The bibliographies of these 102 documents were merged and used to identify retrospectively all articles cited five times or more and all books cited seven times or more. All together, 205 documents were analyzed. To develop an integrated model, the data were synthesized using an analytical approach.\n\n\nFINDINGS\nThis article developed integrated conceptualizations of the forms of collective knowledge exchange systems, the nature of the knowledge exchanged, and the definition of collective-level use. This literature synthesis is organized around three dimensions of context: level of polarization (politics), cost-sharing equilibrium (economics), and institutionalized structures of communication (social structuring).\n\n\nCONCLUSIONS\nThe model developed here suggests that research is unlikely to provide context-independent evidence for the intrinsic efficacy of knowledge exchange strategies. To design a knowledge exchange intervention to maximize knowledge use, a detailed analysis of the context could use the kind of framework developed here.", "title": "" }, { "docid": "b89f999bd27a6cbe1865f8853e384eba", "text": "A rescue crawler robot with flipper arms has high ability to get over rough terrain, but it is hard to control its flipper arms in remote control. The authors aim at development of a semi-autonomous control system for the solution. In this paper, the authors propose a sensor reflexive method that controls these flippers autonomously for getting over unknown steps. Our proposed method is effective in unknown and changeable environment. The authors applied the proposed method to Aladdin, and examined validity of these control rules in unknown environment.", "title": "" }, { "docid": "e1e836fe6ff690f9c85443d26a1448e3", "text": "■ We describe an apparatus and methodology to support real-time color imaging for night operations. Registered imagery obtained in the visible through nearinfrared band is combined with thermal infrared imagery by using principles of biological opponent-color vision. Visible imagery is obtained with a Gen III image intensifier tube fiber-optically coupled to a conventional charge-coupled device (CCD), and thermal infrared imagery is obtained by using an uncooled thermal imaging array. The two fields of view are matched and imaged through a dichroic beam splitter to produce realistic color renderings of a variety of night scenes. We also demonstrate grayscale and color fusion of intensified-CCD/FLIR imagery. Progress in the development of a low-light-sensitive visible CCD imager with high resolution and wide intrascene dynamic range, operating at thirty frames per second, is described. Example low-light CCD imagery obtained under controlled illumination conditions, from full moon down to overcast starlight, processed by our adaptive dynamic-range algorithm, is shown. 
The combination of a low-light visible CCD imager and a thermal infrared microbolometer array in a single dualband imager, with a portable image-processing computer implementing our neuralnet algorithms, and color liquid-crystal display, yields a compact integrated version of our system as a solid-state color night-vision device. The systems described here can be applied to a large variety of military operations and civilian needs.", "title": "" }, { "docid": "3419c35e0dff7b47328943235419a409", "text": "Several methods of classification of partially edentulous arches have been proposed and are in use. The most familiar classifications are those originally proposed by Kennedy, Cummer, and Bailyn. None of these classification systems include implants, simply because most of them were proposed before implants became widely accepted. At this time, there is no classification system for partially edentulous arches incorporating implants placed or to be placed in the edentulous spaces for a removable partial denture (RPD). This article proposes a simple classification system for partially edentulous arches with implants based on the Kennedy classification system, with modification, to be used for RPDs. It incorporates the number and positions of implants placed or to be placed in the edentulous areas. A different name, Implant-Corrected Kennedy (ICK) Classification System, is given to the new classification system to be differentiated from other partially edentulous arch classification systems.", "title": "" }, { "docid": "f6f984853e9fa9a77e3f2c473a9a05d8", "text": "Autonomous driving within the pedestrian environment is always challenging, as the perception ability is limited by the crowdedness and the planning process is constrained by the complicated human behaviors. In this paper, we present a vehicle planning system for self-driving with limited perception in the pedestrian environment. Acknowledging the difficulty of obstacle detection and tracking within the crowded pedestrian environment, only the raw LIDAR sensing data is employed for the purpose of traversability analysis and vehicle planning. The designed vehicle planning system has been experimentally validated to be robust and safe within the populated pedestrian environment.", "title": "" }, { "docid": "0e012c89f575d116e94b1f6718c8fe4d", "text": "Tagging is an increasingly important task in natural language processing domains. As there are many natural language processing tasks which can be improved by applying disambiguation to the text, fast and high quality tagging algorithms are a crucial task in information retrieval and question answering. Tagging aims to assigning to each word of a text its correct tag according to the context in which the word is used. Part Of Speech (POS) tagging is a difficult problem by itself, since many words has a number of possible tags associated to it. In this paper we present a novel algorithm that deals with POS-tagging problem based on Harmony Search (HS) optimization method. This paper analyzes the relative advantages of HS metaheuristic approache to the well-known natural language processing problem of POS-tagging. In the experiments we conducted, we applied the proposed algorithm on linguistic corpora and compared the results obtained against other optimization methods such as genetic and simulated annealing algorithms. 
Experimental results reveal that the proposed algorithm provides more accurate results compared to the other algorithms.", "title": "" }, { "docid": "0506a7f5dddf874487c90025dff0bc7d", "text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.", "title": "" }, { "docid": "e9326cb2e3b79a71d9e99105f0259c5a", "text": "Although drugs are intended to be selective, at least some bind to several physiological targets, explaining side effects and efficacy. Because many drug–target combinations exist, it would be useful to explore possible interactions computationally. Here we compared 3,665 US Food and Drug Administration (FDA)-approved and investigational drugs against hundreds of targets, defining each target by its ligands. Chemical similarities between drugs and ligand sets predicted thousands of unanticipated associations. Thirty were tested experimentally, including the antagonism of the β1 receptor by the transporter inhibitor Prozac, the inhibition of the 5-hydroxytryptamine (5-HT) transporter by the ion channel drug Vadilex, and antagonism of the histamine H4 receptor by the enzyme inhibitor Rescriptor. Overall, 23 new drug–target associations were confirmed, five of which were potent (<100 nM). The physiological relevance of one, the drug N,N-dimethyltryptamine (DMT) on serotonergic receptors, was confirmed in a knockout mouse. The chemical similarity approach is systematic and comprehensive, and may suggest side-effects and new indications for many drugs.", "title": "" }, { "docid": "8f137f55376693eeedb8fc5b1e86518a", "text": "Previous studies have shown that both αA- and αB-crystallins bind Cu2+, suppress the formation of Cu2+-mediated active oxygen species, and protect ascorbic acid from oxidation by Cu2+. αA- and αB-crystallins are small heat shock proteins with molecular chaperone activity. In this study we show that the mini-αA-crystallin, a peptide consisting of residues 71-88 of αA-crystallin, prevents copper-induced oxidation of ascorbic acid. Evaluation of binding of copper to mini-αA-crystallin showed that each molecule of mini-αA-crystallin binds one copper molecule. Isothermal titration calorimetry and nanospray mass spectrometry revealed dissociation constants of 10.72 and 9.9 μM, respectively. 
1,1'-Bis(4-anilino)naphthalene-5,5'-disulfonic acid interaction with mini-αA-crystallin was reduced after binding of Cu2+, suggesting that the same amino acids interact with these two ligands. Circular dichroism spectrometry showed that copper binding to mini-αA-crystallin peptide affects its secondary structure. Substitution of the His residue in mini-αA-crystallin with Ala abolished the redox-suppression activity of the peptide. During the Cu2+-induced ascorbic acid oxidation assay, a deletion mutant, αAΔ70-77, showed about 75% loss of ascorbic acid protection compared to the wild-type αA-crystallin. This difference indicates that the 70-77 region is the primary Cu2+-binding site(s) in human native full-size αA-crystallin. The role of the chaperone site in Cu2+ binding in native αA-crystallin was confirmed by the significant loss of chaperone activity by the peptide after Cu2+ binding.", "title": "" }, { "docid": "565efa7a51438990b3d8da6222dca407", "text": "The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works.", "title": "" }, { "docid": "28c3e990b40b62069010e0a7f94adb11", "text": "Steep sub-threshold transistors are promising candidates to replace the traditional MOSFETs for sub-threshold leakage reduction. In this paper, we explore the use of Inter-Band Tunnel Field Effect Transistors (TFETs) in SRAMs at ultra low supply voltages. The uni-directional current conducting TFETs limit the viability of 6T SRAM cells. To overcome this limitation, 7T SRAM designs were proposed earlier at the cost of extra silicon area. In this paper, we propose a novel 6T SRAM design using Si-TFETs for reliable operation with low leakage at ultra low voltages. We also demonstrate that a functional 6T TFET SRAM design with comparable stability margins and faster performances at low voltages can be realized using proposed design when compared with the 7T TFET SRAM cell. We achieve a leakage reduction improvement of 700X and 1600X over traditional CMOS SRAM designs at VDD of 0.3V and 0.5V respectively which makes it suitable for use at ultra-low power applications.", "title": "" }, { "docid": "ec4b7c50f3277bb107961c9953fe3fc4", "text": "A blockchain is a linked-list of immutable tamper-proof blocks, which is stored at each participating node. Each block records a set of transactions and the associated metadata. 
Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto (Satoshi 2008), as a peer-to-peer money exchange system. Nakamoto referred to the transactional tokens exchanged among clients in his system, as Bitcoins. Overview", "title": "" }, { "docid": "55a29653163bdf9599bf595154a99a25", "text": "The effect of the steel slag aggregate aging on mechanical properties of the high performance concrete is analysed in the paper. The effect of different aging periods of steel slag aggregate on mechanical properties of high performance concrete is studied. It was observed that properties of this concrete are affected by the steel slag aggregate aging process. The compressive strength increases with an increase in the aging period of steel slag aggregate. The flexural strength, Young’s modulus, and impact strength of concrete, increase at the rate similar to that of the compressive strength. The workability and the abrasion loss of concrete decrease with an increase of the steel slag aggregate aging period.", "title": "" }, { "docid": "aff504d1c2149d13718595fd3e745eb0", "text": "Figure 1 illustrates a typical example of a prediction problem: given some noisy observations of a dependent variable at certain values of the independent variable , what is our best estimate of the dependent variable at a new value, ? If we expect the underlying function to be linear, and can make some assumptions about the input data, we might use a least-squares method to fit a straight line (linear regression). Moreover, if we suspect may also be quadratic, cubic, or even nonpolynomial, we can use the principles of model selection to choose among the various possibilities. Gaussian process regression (GPR) is an even finer approach than this. Rather than claiming relates to some specific models (e.g. ), a Gaussian process can represent obliquely, but rigorously, by letting the data ‘speak’ more clearly for themselves. GPR is still a form of supervised learning, but the training data are harnessed in a subtler way. As such, GPR is a less ‘parametric’ tool. However, it’s not completely free-form, and if we’re unwilling to make even basic assumptions about , then more general techniques should be considered, including those underpinned by the principle of maximum entropy; Chapter 6 of Sivia and Skilling (2006) offers an introduction.", "title": "" }, { "docid": "a1d96f46cd4fa625da9e1bf2f6299c81", "text": "The availability of increasingly higher power commercial microwave monolithic integrated circuit (MMIC) amplifiers enables the construction of solid state amplifiers achieving output powers and performance previously achievable only from traveling wave tube amplifiers (TWTAs). A high efficiency power amplifier incorporating an antipodal finline antenna array within a coaxial waveguide is investigated at Ka Band. The coaxial waveguide combiner structure is used to demonstrate a 120 Watt power amplifier from 27 to 31GHz by combining quantity (16), 10 Watt GaN MMIC devices; achieving typical PAE of 25% for the overall power amplifier assembly.", "title": "" }, { "docid": "fb58d6fe77092be4bce5dd0926c563de", "text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. 
By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.", "title": "" }, { "docid": "d41cd48a377afa6b95598d2df6a27b08", "text": "Graph-based approaches have been most successful in semisupervised learning. In this paper, we focus on label propagation in graph-based semisupervised learning. One essential point of label propagation is that the performance is heavily affected by incorporating underlying manifold of given data into the input graph. The other more important point is that in many recent real-world applications, the same instances are represented by multiple heterogeneous data sources. A key challenge under this setting is to integrate different data representations automatically to achieve better predictive performance. In this paper, we address the issue of obtaining the optimal linear combination of multiple different graphs under the label propagation setting. For this problem, we propose a new formulation with the sparsity (in coefficients of graph combination) property which cannot be rightly achieved by any other existing methods. This unique feature provides two important advantages: 1) the improvement of prediction performance by eliminating irrelevant or noisy graphs and 2) the interpretability of results, i.e., easily identifying informative graphs on classification. We propose efficient optimization algorithms for the proposed approach, by which clear interpretations of the mechanism for sparsity is provided. Through various synthetic and two real-world data sets, we empirically demonstrate the advantages of our proposed approach not only in prediction performance but also in graph selection ability.", "title": "" }, { "docid": "7bc2bacc409341415c8ac9ca3c617c9b", "text": "Many tasks in artificial intelligence require the collaboration of multiple agents. We exam deep reinforcement learning for multi-agent domains. Recent research efforts often take the form of two seemingly conflicting perspectives, the decentralized perspective, where each agent is supposed to have its own controller; and the centralized perspective, where one assumes there is a larger model controlling all agents. In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework. Such a hierarchical structure naturally leverages advantages from one another. The idea of combining both perspectives is intuitive and can be well motivated from many real world systems, however, out of a variety of possible realizations, we highlights three key ingredients, i.e. composed action representation, learnable communication and independent reasoning. With network designs to facilitate these explicitly, our proposal consistently outperforms latest competing methods both in synthetic experiments and when applied to challenging StarCraft1 micromanagement tasks.", "title": "" } ]
scidocsrr
4dca50f6d5bc4415435c5f3a3ec3c090
Neuromorphic Computing Based on Emerging Memory Technologies
[ { "docid": "a208187fc81a633ac9332ee11567b1a7", "text": "Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin-Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.", "title": "" }, { "docid": "be283056a8db3ab5b2481f3dc1f6526d", "text": "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.", "title": "" }, { "docid": "b720fd664323a19225f0ec2966e175e2", "text": "\"Memory\" is an essential building block in learning and decision-making in biological systems. Unlike modern semiconductor memory devices, needless to say, human memory is by no means eternal. Yet, forgetfulness is not always a disadvantage since it releases memory storage for more important or more frequently accessed pieces of information and is thought to be necessary for individuals to adapt to new environments. Eventually, only memories that are of significance are transformed from short-term memory into long-term memory through repeated stimulation. In this study, we show experimentally that the retention loss in a nanoscale memristor device bears striking resemblance to memory loss in biological systems. By stimulating the memristor with repeated voltage pulses, we observe an effect analogous to memory transition in biological systems with much improved retention time accompanied by additional structural changes in the memristor. We verify that not only the shape or the total number of stimuli is influential, but also the time interval between stimulation pulses (i.e., the stimulation rate) plays a crucial role in determining the effectiveness of the transition. The memory enhancement and transition of the memristor device was explained from the microscopic picture of impurity redistribution and can be qualitatively described by the same equations governing biological memories.", "title": "" } ]
[ { "docid": "a068988ab0492dd617321c01a07b38ad", "text": "Human activity recognition is a key task of many Internet of Things (IoT) applications to understand underlying contexts and react with the environments. Machine learning is widely exploited to identify the activities from sensor measurements, however, they are often overcomplex to run on less-powerful IoT devices. In this paper, we present an alternative approach to efficiently support the activity recognition tasks using brain-inspired hyperdimensional (HD) computing. We show how the HD computing method can be applied to the recognition problem in IoT systems while improving the accuracy and efficiency. In our evaluation conducted for three practical datasets, the proposed design achieves the speedup of the model training by up to 486x as compared to the state-of-the-art neural network training. In addition, our design improves the performance of the HD-based inference procedure by 7x on a low-power ARM processor.", "title": "" }, { "docid": "2b3c9f9c2c44d1b532f15e00e3853671", "text": "Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep convolutional neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past few years, tremendous progresses have been made in this area. In this paper, we survey the recent advanced techniques for compacting and accelerating CNNs model developed. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transfered/compact convolutional filters and knowledge distillation. Methods of parameter pruning and sharing will be described at the beginning, after that the other techniques will be introduced. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages and drawbacks etc. Then we will go through a few very recent additional successful methods, for example, dynamic networks and stochastic depths networks. After that, we survey the evaluation matrix, main datasets used for evaluating the model performance and recent benchmarking efforts. Finally we conclude this paper, discuss remaining challenges and possible directions in this topic.", "title": "" }, { "docid": "798cd7ebdd234cb62b32d963fdb51af0", "text": "The use of frontal sinus radiographs in positive identification has become an increasingly applied and accepted technique among forensic anthropologists, radiologists, and pathologists. From an evidentiary standpoint, however, it is important to know whether frontal sinus radiographs are a reliable method for confirming or rejecting an identification, and standardized methods should be applied when making comparisons. The purpose of the following study is to develop an objective, standardized comparison method, and investigate the reliability of that method. Elliptic Fourier analysis (EFA) was used to assess the variation in 808 outlines of frontal sinuses by calculating likelihood ratios and posterior probabilities from EFA coefficients. 
Results show that using EFA coefficient comparison to estimate the probability of a correct identification is a reliable technique, and EFA comparison of frontal sinus outlines is recommended when it may be necessary to provide quantitative substantiation for a forensic identification based on these structures.", "title": "" }, { "docid": "b73faefcb1a9abbf10b49f6d9e7cc360", "text": "Conditional Batch Normalization (CBN) has proved to be an effective tool for visual question answering. However, previous CBN approaches fuse the linguistic information into image features via a simple affine transformation, thus they have struggled on compositional reasoning and object counting in images. In this paper, we propose a novel CBN method using the Kronecker transformation, termed as Conditional Kronecker Batch Normalization (CKBN). CKBN layer facilitates the explicit and expressive learning of compositional reasoning and robust counting in original images. Besides, we demonstrate that the Kronecker transformation in CKBN layer is a generalization of the affine transformation in prior CBN approaches. It could accelerate the fusion of visual and linguistic information, and thus the convergence of overall model. Experiment results show that our model significantly outperforms previous CBN methods (e.g. FiLM) in compositional reasoning, counting as well as the convergence speed on CLEVR dataset.", "title": "" }, { "docid": "c8ec9829957991bfacc4f9faaf0566b9", "text": "Cross lingual projection of linguistic annotation suffers from many sources of bias and noise, leading to unreliable annotations that cannot be used directly. In this paper, we introduce a novel approach to sequence tagging that learns to correct the errors from cross-lingual projection using an explicit debiasing layer. This is framed as joint learning over two corpora, one tagged with gold standard and the other with projected tags. We evaluated with only 1,000 tokens tagged with gold standard tags, along with more plentiful parallel data. Our system equals or exceeds the state-of-the-art on eight simulated lowresource settings, as well as two real lowresource languages, Malagasy and Kinyarwanda.", "title": "" }, { "docid": "fb048df280c08a4d80eb18bafb36e6c7", "text": "There are very few reported cases of traumatic amputation of the male genitalia due to animal bite. The management involves thorough washout of the wounds, debridement, antibiotic prophylaxis, tetanus and rabies immunization followed by immediate reconstruction or primary wound closure with delayed reconstruction, when immediate reconstruction is not feasible. When immediate reconstruction is not feasible, long-term good functional and cosmetic results are still possible in the majority of cases by performing total phallic reconstruction. In particular, it is now possible to fashion a cosmetically acceptable sensate phallus with incorporated neourethra, to allow the patient to void while standing and to ejaculate, and with enough bulk to allow the insertion of a penile prosthesis to guarantee the rigidity necessary to engage in penetrative sexual intercourse.", "title": "" }, { "docid": "938395ce421e0fede708e3b4ab7185b5", "text": "This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. 
A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.", "title": "" }, { "docid": "8100d99d28be7e5ee32a03e34ce3cd14", "text": "Music artists have composed pieces that are both creative and precise. For example, classical music is well-known for its meticulous structure and emotional effect. Recurrent Neural Networks (RNNs) are powerful models that have achieved excellent performance on difficult learning tasks having temporal dependencies. We propose generative RNN models that create sheet music with well-formed structure and stylistic conventions without predefining music composition rules to the models. We see that Character RNNs are able to learn some patterns but not create structurally accurate music, with a test accuracy of 60% and fooling only upto 35% of the human listeners to believe that the music was created by a human. Generative Adversarial Networks (GANs) were tried, with various training techniques, but produced no meaningful results due to instable training. On the other hand, Seq2Seq models do very well in producing both structurally correct and human-pleasing music, with a test accuracy of 65% and some of its generated music fooling ⇠ 70% of the human listeners.", "title": "" }, { "docid": "7681a78f2d240afc6b2e48affa0612c1", "text": "Web usage mining applies data mining procedures to analyze user access of Web sites. As with any KDD (knowledge discovery and data mining) process, WUM contains three main steps: preprocessing, knowledge extraction, and results analysis. We focus on data preprocessing, a fastidious, complex process. Analysts aim to determine the exact list of users who accessed the Web site and to reconstitute user sessions-the sequence of actions each user performed on the Web site. Intersites WUM deals with Web server logs from several Web sites, generally belonging to the same organization. Thus, analysts must reassemble the users' path through all the different Web servers that they visited. Our solution is to join all the log files and reconstitute the visit. Classical data preprocessing involves three steps: data fusion, data cleaning, and data structuration. Our solution for WUM adds what we call advanced data preprocessing. This consists of a data summarization step, which will allow the analyst to select only the information of interest. We've successfully tested our solution in an experiment with log files from INRIA Web sites.", "title": "" }, { "docid": "54b11a906e212a34320d6bbed2cac0fc", "text": "PURPOSE\nThis study aimed to compare strategies for assessing nutritional adequacy in the dietary intake of elite female athletes.\n\n\nMETHODS\nDietary intake was assessed using an adapted food-frequency questionnaire in 72 elite female athletes from a variety of sports. 
Nutritional adequacy was evaluated and compared using mean intake; the proportion of participants with intakes below Australian nutrient reference values (NRV), U.S. military dietary reference intakes (MDRI), and current sports nutrition recommendations; and probability estimates of nutrient inadequacy.\n\n\nRESULTS\nMean energy intake was 10,551 +/- 3,836 kJ/day with macronutrient distribution 18% protein, 31% fat, and 46% carbohydrate, consistent with Australian acceptable macronutrient distribution ranges. Mean protein intake (1.6 g . kg(-1) . d(-1)) was consistent with (>1.2 g . kg(-1) . d(-1)), and carbohydrate intake (4.5 g . kg(-1) . d(-1)), below, current sports nutrition recommendations (>5 g . kg(-1) . d(-1)), with 30% and 65% of individuals not meeting these levels, respectively. Mean micronutrient intake met the relevant NRV and MDRI except for vitamin D and folate. A proportion of participants failed to meet the estimated average requirement for folate (48%), calcium (24%), magnesium (19%), and iron (4%). Probability estimates of inadequacy identified intake of folate (44%), calcium (22%), iron (19%), and magnesium (15%) as inadequate.\n\n\nCONCLUSION\nInterpretation of dietary adequacy is complex and varies depending on whether the mean, proportion of participants below the relevant NRV, or statistical probability estimate of inadequacy is used. Further research on methods to determine dietary adequacy in athlete populations is required.", "title": "" }, { "docid": "3ce69e8f46fac6029c506445b4e7634e", "text": "Resumen. En este art́ıculo se presenta el desarrollo de un sistema de reconocimiento de emociones basado en la voz. Se consideraron las siguientes emociones básicas: Enojo, Felicidad, Neutro y Tristeza. Para este propósito una base de datos de voz emocional fue creada con ocho usuarios Mexicanos con 640 frases (8 usuarios × 4 emociones × 20 frases por emoción). Los Modelos Ocultos de Markov (Hidden Markov Models, HMMs) fueron usados para construir el sistema de reconocimiento. Basado en el concepto de modelado acústico de vocales espećıficas emotivas un total de 20 fonemas de vocales (5 vocales × 4 emociones) y 22 fonemas de consonantes fueron considerados para el entrenamiento de los HMMs. Un Algoritmo Genético (Genetic Algorithm, GA) fue integrado dentro del proceso de reconocimiento para encontrar la arquitectura más adecuada para el HMM para cada vocal espećıfica emotiva. Una tasa de reconocimiento total aproximada del 90.00 % fue conseguida con el reconocedor de voz construido con los HMMs optimizados.", "title": "" }, { "docid": "e011ab57139a9a2f6dc13033b0ab6223", "text": "Over the last few years, virtual reality (VR) has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate immersive experiences at low cost. One of the remaining challenges to further improve visual comfort in VR experiences is the vergence-accommodation conflict inherent to all stereoscopic displays. Accurate reproduction of all depth cues is crucial for visual comfort. By combining well-known stereoscopic display principles with emerging factored light field technology, we present the first wearable VR display supporting high image resolution as well as focus cues. A light field is presented to each eye, which provides more natural viewing experiences than conventional near-eye displays. 
Since the eye box is just slightly larger than the pupil size, rank-1 light field factorizations are sufficient to produce correct or nearly-correct focus cues; no time-multiplexed image display or gaze tracking is required. We analyze lens distortions in 4D light field space and correct them using the afforded high-dimensional image formation. We also demonstrate significant improvements in resolution and retinal blur quality over related near-eye displays. Finally, we analyze diffraction limits of these types of displays.", "title": "" }, { "docid": "519938ce62ec33e6c352e602b6db70f0", "text": "The simplicity of programming the CRISPR (clustered regularly interspaced short palindromic repeats)–associated nuclease Cas9 to modify specific genomic loci suggests a new way to interrogate gene function on a genome-wide scale. We show that lentiviral delivery of a genome-scale CRISPR-Cas9 knockout (GeCKO) library targeting 18,080 genes with 64,751 unique guide sequences enables both negative and positive selection screening in human cells. First, we used the GeCKO library to identify genes essential for cell viability in cancer and pluripotent stem cells. Next, in a melanoma model, we screened for genes whose loss is involved in resistance to vemurafenib, a therapeutic RAF inhibitor. Our highest-ranking candidates include previously validated genes NF1 and MED12, as well as novel hits NF2, CUL3, TADA2B, and TADA1. We observe a high level of consistency between independent guide RNAs targeting the same gene and a high rate of hit confirmation, demonstrating the promise of genome-scale screening with Cas9.", "title": "" }, { "docid": "d4ca93d0aeabda1b90bb3f0f16df9ee8", "text": "Smart card technology has evolved over the last few years following notable improvements in the underlying hardware and software platforms. Advanced smart card microprocessors, along with robust smart card operating systems and platforms, contribute towards a broader acceptance of the technology. These improvements have eliminated some of the traditional smart card security concerns. However, researchers and hackers are constantly looking for new issues and vulnerabilities. In this article we provide a brief overview of the main smart card attack categories and their corresponding countermeasures. We also provide examples of well-documented attacks on systems that use smart card technology (e.g. satellite TV, EMV, proximity identification) in an attempt to highlight the importance of the security of the overall system rather than just the smart card. a 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c166a5ac33c4bf0ffe055578f016e72f", "text": "The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. 
As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).", "title": "" }, { "docid": "3d02737fa76e85619716a9dc7136248a", "text": "Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources – labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects.", "title": "" }, { "docid": "ebc1e12f85c6b03de14b1170f450d3f8", "text": "Mobility disability is becoming prevalent in the obese older population (≥60 years of age). We included a total of 13 cross-sectional and 15 longitudinal studies based on actual physical assessments of mobility in the obese older population in this review. We systematically examined existing evidence of which adiposity estimate best predicted mobility disability. Cross-sectional studies (82-4000 participants) showed poorer lower extremity mobility with increasing obesity severity in both men and women. All longitudinal studies (1-22 years) except for one, reported relationships between adiposity and declining mobility. While different physical tests made interpretation challenging, a consistent finding was that walking, stair climbing and chair rise ability were compromised with obesity, especially if the body mass index (BMI) exceeded 35 kg·m⁻². More studies found that obese women were at an increased risk for mobility impairment than men. Existing evidence suggests that BMI and waist circumference are emerging as the more consistent predictors of the onset or worsening of mobility disability. Limited interventional evidence shows that weight loss is related with increased mobility and lower extremity function. Additional longitudinal studies are warranted that address overall body composition fat and muscle mass or change on future disability.", "title": "" }, { "docid": "565a6f620f9ccd33b6faa5a7f37df188", "text": "Fog computing (FC) and Internet of Everything (IoE) are two emerging technological paradigms that, to date, have been considered standing-alone.
However, because of their complementary features, we expect that their integration can foster a number of computing and network-intensive pervasive applications under the incoming realm of the future Internet. Motivated by this consideration, the goal of this position paper is fivefold. First, we review the technological attributes and platforms proposed in the current literature for the standing-alone FC and IoE paradigms. Second, by leveraging some use cases as illustrative examples, we point out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues. Third, we propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, that integrates FC and IoE and then we detail the main building blocks and services of the corresponding technological platform and protocol stack. Fourth, as a proof-of-concept, we present the simulated energy-delay performance of a small-scale FoE prototype, namely, the V-FoE prototype. Afterward, we compare the obtained performance with the corresponding one of a benchmark technological platform, e.g., the V-D2D one. It exploits only device-to-device links to establish inter-thing “ad hoc” communication. Last, we point out the position of the proposed FoE paradigm over a spectrum of seemingly related recent research projects.", "title": "" }, { "docid": "ce6d7185031f1b205181298909e8a020", "text": "BACKGROUND\nMost preschoolers with viral wheezing exacerbations are not atopic.\n\n\nAIM\nTo test in a prospective controlled trial whether wheezing preschoolers presenting to the ED are different from the above in three different domains defining asthma: the atopic characteristics based on stringent asthma predictive index (S-API), the characteristics of bronchial hyper-responsiveness (BHR), and airway inflammation.\n\n\nMETHODS\nThe S-API was prospectively collected in 41 preschoolers (age 31.9 ± 17.4 months, range; 1-6 years) presenting to the ED with acute wheezing and compared to healthy preschoolers (n = 109) from our community (community control group). Thirty out of the 41 recruited preschoolers performed two sets of bronchial challenge tests (BCT)-(methacholine and adenosine) within 3 weeks and following 3 months of the acute event and compared to 30 consecutive ambulatory preschoolers, who performed BCT for diagnostic workup in our laboratory (ambulatory control group). On presentation, induced sputum (IS) was obtained from 22 of the 41 children.\n\n\nOUTCOMES\nPrimary: S-API, secondary: BCTs characteristics and percent eosinophils in IS.\n\n\nRESULTS\nSignificantly more wheezing preschoolers were S-API positive compared with the community control group: 20/41 (48.7%) versus 15/109 (13.7%, P < 0.001). All methacholine-BCTs-30/30 (100%) were positive compared with 13/14 (92.8%) in the ambulatory control group (P = 0.32). However, 23/27 (85.2%) were adenosine-BCT positive versus 3/17 (17.5%) in the ambulatory control group (P < 0.001). Diagnostic IS success rate was 18/22 (81.8%). 
Unexpectedly, 9/18 (50.0%) showed eosinophilia in the IS.\n\n\nCONCLUSIONS\nWheezing preschoolers presenting to the ED is a unique population with significantly higher rate of positive S-API and adenosine-BCT compared with controls and frequently (50%) express eosinophilic airway inflammation.", "title": "" }, { "docid": "f1e646a0627a5c61a0f73a41d35ccac7", "text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.", "title": "" } ]
scidocsrr
92a2784f998c9ccf7ff30d4b2a9ae296
Conception, development and implementation of an e-Government maturity model in public agencies
[ { "docid": "82fa51c143159f2b85f9d2e5b610e30d", "text": "Strategies are systematic and long-term approaches to problems. Federal, state, and local governments are investing in the development of strategies to further their e-government goals. These strategies are based on their knowledge of the field and the relevant resources available to them. Governments are communicating these strategies to practitioners through the use of practical guides. The guides provide direction to practitioners as they consider, make a case for, and implement IT initiatives. This article presents an analysis of a selected set of resources government practitioners use to guide their e-government efforts. A selected review of current literature on the challenges to information technology initiatives is used to create a framework for the analysis. A gap analysis examines the extent to which IT-related research is reflected in the practical guides. The resulting analysis is used to identify a set of commonalities across the practical guides and a set of recommendations for future development of practitioner guides and future research into e-government initiatives. D 2005 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "9686ae3ca715c325e616c001b445531b", "text": "IA-32 Execution Layer (IA-32 EL) is a newtechnology that executes IA-32 applications onIntel® Itanium® processor family systems.Currently, support for IA-32 applications onItanium-based platforms is achieved usinghardware circuitry on the Itanium processors.This capability will be enhanced with IA-32EL-software that will ship with Itanium-basedoperating systems and will convert IA-32instructions into Itanium instructions viadynamic translation.In this paper, we describeaspects of the IA-32 Execution Layertechnology, including the general two-phasetranslation architecture and the usage of asingle translator for multiple operatingsystems.The paper provides details of someof the technical challenges such as preciseexception, emulation of FP, MMXTM, and Intel®Streaming SIMD Extension instructions, andmisalignment handling.Finally, the paperpresents some performance results.", "title": "" }, { "docid": "1f4ccaef3ff81f9680b152a3e7b3d178", "text": "We propose a method for forecasting high-dimensional data (hundreds of attributes, trillions of attribute combinations) for a duration of several months. Our motivating application is guaranteed display advertising, a multi-billion dollar industry, whereby advertisers can buy targeted (high-dimensional) user visits from publishers many months or even years in advance. Forecasting high-dimensional data is challenging because of the many possible attribute combinations that need to be forecast. To address this issue, we propose a method whereby only a sub-set of attribute combinations are explicitly forecast and stored, while the other combinations are dynamically forecast on-the-fly using high-dimensional attribute correlation models. We evaluate various attribute correlation models, from simple models that assume the independence of attributes to more sophisticated sample-based models that fully capture the correlations in a high-dimensional space. Our evaluation using real-world display advertising data sets shows that fully capturing high-dimensional correlations leads to significant forecast accuracy gains. A variant of the proposed method has been implemented in the context of Yahoo!'s guaranteed display advertising system.", "title": "" }, { "docid": "36b4c028bcd92115107cf245c1e005c8", "text": "CAPTCHA is now almost a standard security technology, and has found widespread application in commercial websites. Usability and robustness are two fundamental issues with CAPTCHA, and they often interconnect with each other. This paper discusses usability issues that should be considered and addressed in the design of CAPTCHAs. Some of these issues are intuitive, but some others have subtle implications for robustness (or security). A simple but novel framework for examining CAPTCHA usability is also proposed.", "title": "" }, { "docid": "ab2f1f27b11a5a41ff6b2b79bc044c2f", "text": "ABSTACT: Trajectory tracking has been an extremely active research area in robotics in the past decade.In this paper, a kinematic model of two wheel mobile robot for reference trajectory tracking is analyzed and simulated. For controlling the wheeled mobile robot PID controllers are used. For finding the optimal parameters of PID controllers, in this work particle swarm optimization (PSO) is used. 
The proposed methodology is shown to be a successful solution for solving the problem.", "title": "" }, { "docid": "9ca90172c5beff5922b4f5274ef61480", "text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.", "title": "" }, { "docid": "306136e7ffd6b1839956d9f712afbda2", "text": "Dynamic scheduling cloud resources according to the change of the load are key to improve cloud computing on-demand service capabilities. This paper proposes a load-adaptive cloud resource scheduling model based on ant colony algorithm. By real-time monitoring virtual machine of performance parameters, once judging overload, it schedules fast cloud resources using ant colony algorithm to bear some load on the load-free node. So that it can meet changing load requirements. By analyzing an example result, the model can meet the goals and requirements of self-adaptive cloud resources scheduling and improve the efficiency of the resource utilization.", "title": "" }, { "docid": "99bf50d4a382d9ed8548b3be3d91acd4", "text": "We present a new descriptor for tactile 3D object classification. It is invariant to object movement and simple to construct, using only the relative geometry of points on the object surface. We demonstrate successful classification of 185 objects in 10 categories, at sparse to dense surface sampling rate in point cloud simulation, with an accuracy of 77.5% at the sparsest and 90.1% at the densest. In a physics-based simulation, we show that contact clouds resembling the object shape can be obtained by a series of gripper closures using a robotic hand equipped with sparse tactile arrays. Despite sparser sampling of the object's surface, classification still performs well, at 74.7%. On a real robot, we show the ability of the descriptor to discriminate among different object instances, using data collected by a tactile hand.", "title": "" }, { "docid": "5b76ef357e706d81b31fd9fabb8ea685", "text": "This paper reports the design and development of aluminum nitride (AlN) piezoelectric RF resonant voltage amplifiers for Internet of Things (IoT) applications. These devices can provide passive and highly frequency selective voltage gain to RF backends with a capacitive input to drastically enhance sensitivity and to reduce power consumption of the transceiver. Both analytical and finite element models (FEM) have been utilized to identify the optimal designs.
Consequently, an AlN voltage amplifier with an open circuit gain of 7.27 and a fractional bandwidth (FBW) of 0.11 % has been demonstrated. This work provides a material-agnostic framework for analytically optimizing piezoelectric voltage amplifiers.", "title": "" }, { "docid": "7ee31d080b3cd7632c25c22b378e6d91", "text": "Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is however not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such “out-of-equilibrium” behavior is a consequence of highly nonisotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims. This article summarizes the findings in [1]. See the longer version for background, detailed results and proofs.", "title": "" }, { "docid": "bed5efa3e268ef0fd2f3ae750b26aad4", "text": "In this paper, we describe our recent results in the development of a new class of soft, continuous backbone (“continuum”) robot manipulators. Our work is strongly motivated by the dexterous appendages found in cephalopods, particularly the arms and suckers of octopus, and the arms and tentacles of squid. Our ongoing investigation of these animals reveals interesting and unexpected functional aspects of their structure and behavior. The arrangement and dynamic operation of muscles and connective tissue observed in the arms of a variety of octopus species motivate the underlying design approach for our soft manipulators. These artificial manipulators feature biomimetic actuators, including artificial muscles based on both electro-active polymers (EAP) and pneumatic (McKibben) muscles. They feature a “clean” continuous backbone design, redundant degrees of freedom, and exhibit significant compliance that provides novel operational capacities during environmental interaction and object manipulation. The unusual compliance and redundant degrees of freedom provide strong potential for application to delicate tasks in cluttered and/or unstructured environments. Our aim is to endow these compliant robotic mechanisms with the diverse and dexterous grasping behavior observed in octopuses. To this end, we are conducting fundamental research into the manipulation tactics, sensory biology, and neural control of octopuses. This work in turn leads to novel approaches to motion planning and operator interfaces for the robots. 
The paper describes the above efforts, along with the results of our development of a series of continuum tentacle-like robots, demonstrating the unique abilities of biologically-inspired design.", "title": "" }, { "docid": "140d6d345aa6d486a30e596dde25a8ef", "text": "This research uses the absorptive capacity (ACAP) concept as a theoretical lens to study the effect of e-business upon the competitive performance of SMEs, addressing the following research issue: To what extent are manufacturing SMEs successful in developing their potential and realized ACAP in line with their entrepreneurial orientation? A survey study of 588 manufacturing SMEs found that their e-business capabilities, considered as knowledge acquisition and assimilation capabilities have an indirect effect on their competitive performance that is mediated by their knowledge transformation and exploitation capabilities, and insofar as these capabilities are developed as a result of a more entrepreneurial orientation on their part. Finally, the effect of this orientation on the SMEs' competitive performance appears to be totally mediated by their ACAP.", "title": "" }, { "docid": "e41ae766a1995f918184efb73b2212b7", "text": "Among the signature schemes most widely deployed in practice are the DSA (Digital Signature Algorithm) and its elliptic curves variant ECDSA. They are represented in many international standards, including IEEE P1363, ANSI X9.62, and FIPS 186-4. Their popularity stands in stark contrast to the absence of rigorous security analyses: Previous works either study modified versions of (EC)DSA or provide a security analysis of unmodified ECDSA in the generic group model. Unfortunately, works following the latter approach assume abstractions of non-algebraic functions over generic groups for which it remains unclear how they translate to the security of ECDSA in practice. For instance, it has been pointed out that prior results in the generic group model actually establish strong unforgeability of ECDSA, a property that the scheme de facto does not possess. As, further, no formal results are known for DSA, understanding the security of both schemes remains an open problem. In this work we propose GenericDSA, a signature framework that subsumes both DSA and ECDSA in unmodified form. It carefully models the \"modulo q\" conversion function of (EC)DSA as a composition of three independent functions. The two outer functions mimic algebraic properties in the function's domain and range, the inner one is modeled as a bijective random oracle. We rigorously prove results on the security of GenericDSA that indicate that forging signatures in (EC)DSA is as hard as solving discrete logarithms. Importantly, our proofs do not assume generic group behavior.", "title": "" }, { "docid": "07300a47b34574012b6b7efbd0bb66ea", "text": "The incidence of diabetes and its associated micro- and macrovascular complications is greatly increasing worldwide. The most prevalent vascular complications of both type 1 and type 2 diabetes include nephropathy, retinopathy, neuropathy and cardiovascular diseases. Evidence suggests that both genetic and environmental factors are involved in these pathologies. Clinical trials have underscored the beneficial effects of intensive glycaemic control for preventing the progression of complications. 
Accumulating evidence suggests a key role for epigenetic mechanisms such as DNA methylation, histone post-translational modifications in chromatin, and non-coding RNAs in the complex interplay between genes and the environment. Factors associated with the pathology of diabetic complications, including hyperglycaemia, growth factors, oxidant stress and inflammatory factors can lead to dysregulation of these epigenetic mechanisms to alter the expression of pathological genes in target cells such as endothelial, vascular smooth muscle, retinal and cardiac cells, without changes in the underlying DNA sequence. Furthermore, long-term persistence of these alterations to the epigenome may be a key mechanism underlying the phenomenon of ‘metabolic memory’ and sustained vascular dysfunction despite attainment of glycaemic control. Current therapies for most diabetic complications have not been fully efficacious, and hence a study of epigenetic mechanisms that may be involved is clearly warranted as they can not only shed novel new insights into the pathology of diabetic complications, but also lead to the identification of much needed new drug targets. In this review, we highlight the emerging role of epigenetics and epigenomics in the vascular complications of diabetes and metabolic memory.", "title": "" }, { "docid": "7cfffa8e9d1e1fb39082c5aba75034b3", "text": "BACKGROUND\nAttempted separation of craniopagus twins has continued to be associated with devastating results since the first partially successful separation with one surviving twin in 1952. To understand the factors that contribute to successful separation in the modern era of neuroimaging and modern surgical techniques, the authors reviewed and analyzed cases reported since 1995.\n\n\nMETHODS\nAll reported cases of craniopagus twin separation attempts from 1995 to 2015 were identified using PubMed (n = 19). In addition, the Internet was searched for additional unreported separation attempts (n = 5). The peer-reviewed cases were used to build a categorical database containing information on each twin pair, including sex; date of birth; date of surgery; multiple- versus single-stage surgery; angular versus vertical conjoining; nature of shared cerebral venous system; and the presence of other comorbidities identified as cardiovascular, genitourinary, and craniofacial. The data were analyzed to find factors associated with successful separation (survival of both twins at postoperative day 30).\n\n\nRESULTS\nVertical craniopagus is associated with successful separation (p < 0.001). No statistical significance was attributed to the nature of the shared cerebral venous drainage or the other variables examined. Multiple-stage operations and surgery before 12 months of age are associated with a trend toward statistical significance for successful separation.\n\n\nCONCLUSIONS\nThe authors' analysis indicates that vertical craniopagus twins have the highest likelihood of successful separation. Additional factors possibly associated with successful separation include the nature of the shared sinus system, surgery at a young age, and the use of staged separations.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.", "title": "" }, { "docid": "2afcc7c1fb9dadc3d46743c991e15bac", "text": "This paper describes the design of a robot head, developed in the framework of the RobotCub project. This project goals consists on the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. 
The final platform would be approximately 90 cm tall, with 23 kg and with a total number of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed, in terms of kinematic complexity. The eyes can also move, as opposed to similarly sized humanoid platforms. Specifications are made based on biological anatomical and behavioral data, as well as tasks constraints. Different concepts for the neck design (flexible, parallel and serial solutions) are analyzed and compared with respect to the specifications. The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design", "title": "" }, { "docid": "beadaf1625fc4e07d3511d46ee68e6e4", "text": "The prevention of accidents is one of the most important goals of ad hoc networks in smart cities. When an accident happens, dynamic sensors (e.g., citizens with smart phones or tablets, smart vehicles and buses, etc.) could shoot a video clip of the accident and send it through the ad hoc network. With a video message, the level of seriousness of the accident could be much better evaluated by the authorities (e.g., health care units, police and ambulance drivers) rather than with just a simple text message. Besides, other citizens would be rapidly aware of the incident. In this way, smart dynamic sensors could participate in reporting a situation in the city using the ad hoc network so it would be possible to have a quick reaction warning citizens and emergency units. The deployment of an efficient routing protocol to manage video-warning messages in mobile Ad hoc Networks (MANETs) has important benefits by allowing a fast warning of the incident, which potentially can save lives. To contribute with this goal, we propose a multipath routing protocol to provide video-warning messages in MANETs using a novel game-theoretical approach. As a base for our work, we start from our previous work, where a 2-players game-theoretical routing protocol was proposed to provide video-streaming services over MANETs. In this article, we further generalize the analysis made for a general number of N players in the MANET. Simulations have been carried out to show the benefits of our proposal, taking into account the mobility of the nodes and the presence of interfering traffic. Finally, we also have tested our approach in a vehicular ad hoc network as an incipient start point to develop a novel proposal specifically designed for VANETs.", "title": "" }, { "docid": "58891611a4d9992a671f620a8f753e71", "text": "Many existing structures located in seismic regions are inadequate based on current seismic design codes. In addition, a number of major earthquakes during recent years have underscored the importance of mitigation to reduce seismic risk. Seismic retrofitting of existing structures is one of the most effective methods of reducing this risk. In recent years, a significant amount of research has been devoted to the study of various strengthening techniques to enhance the seismic performance of RC structures. However, the seismic performance of the structure may not be improved by retrofitting or rehabilitation unless the engineer selects an appropriate intervention technique based on seismic evaluation of the structure. Therefore, the basic requirements of rehabilitation and investigations of various retrofit techniques should be considered before selecting retrofit schemes. 
In this report, the characteristics of various intervention techniques are discussed and the relationship between retrofit and structural characteristics is also described. In addition, several case study structures for which retrofit techniques have been applied are presented.", "title": "" }, { "docid": "6483733f9cfd2eaacb5f368e454416db", "text": "A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.", "title": "" }, { "docid": "e14cd8d955d80591f905b3858c9b5d09", "text": "With the advent of the Internet of Things (IoT), security has emerged as a major design goal for smart connected devices. This explosion in connectivity created a larger attack surface area. Software-based approaches have been applied for security purposes; however, these methods must be extended with security-oriented technologies that promote hardware as the root of trust. The ARM TrustZone can enable trusted execution environments (TEEs), but existing solutions disregard real-time needs. Here, the authors demonstrate why TrustZone is becoming a reference technology for securing IoT edge devices, and how enhanced TEEs can help meet industrial IoT applications real-time requirements.", "title": "" }, { "docid": "21502c42ef7a8e342334b93b1b5069d6", "text": "Motivations to engage in retail online shopping can include both utilitarian and hedonic shopping dimensions. To cater to these consumers, online retailers can create a cognitively and esthetically rich shopping environment, through sophisticated levels of interactive web utilities and features, offering not only utilitarian benefits and attributes but also providing hedonic benefits of enjoyment. 
Since the effect of interactive websites has proven to stimulate online consumer’s perceptions, this study presumes that websites with multimedia rich interactive utilities and features can influence online consumers’ shopping motivations and entice them to modify or even transform their original shopping predispositions by providing them with attractive and enhanced interactive features and controls, thus generating a positive attitude towards products and services offered by the retailer. This study seeks to explore the effects of Web interactivity on online consumer behavior through an attitudinal model of technology acceptance.", "title": "" } ]
scidocsrr
2b2f2af64ba9a552e51b0632e6cf170c
BASE: Using Abstraction to Improve Fault Tolerance
[ { "docid": "b8b7abcef8e23f774bd4e74067a27e6f", "text": "This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems. The overall conclusion is that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware. Copyright  1989 Digital Equipment Corporation d i g i t a l Western Research Laboratory 100 Hamilton Avenue Palo Alto, California 94301 USA", "title": "" } ]
[ { "docid": "83d50f7c66b14116bfa627600ded28d6", "text": "Diet can affect cognitive ability and behaviour in children and adolescents. Nutrient composition and meal pattern can exert immediate or long-term, beneficial or adverse effects. Beneficial effects mainly result from the correction of poor nutritional status. For example, thiamin treatment reverses aggressiveness in thiamin-deficient adolescents. Deleterious behavioural effects have been suggested; for example, sucrose and additives were once suspected to induce hyperactivity, but these effects have not been confirmed by rigorous investigations. In spite of potent biological mechanisms that protect brain activity from disruption, some cognitive functions appear sensitive to short-term variations of fuel (glucose) availability in certain brain areas. A glucose load, for example, acutely facilitates mental performance, particularly on demanding, long-duration tasks. The mechanism of this often described effect is not entirely clear. One aspect of diet that has elicited much research in young people is the intake/omission of breakfast. This has obvious relevance to school performance. While effects are inconsistent in well-nourished children, breakfast omission deteriorates mental performance in malnourished children. Even intelligence scores can be improved by micronutrient supplementation in children and adolescents with very poor dietary status. Overall, the literature suggests that good regular dietary habits are the best way to ensure optimal mental and behavioural performance at all times. Then, it remains controversial whether additional benefit can be gained from acute dietary manipulations. In contrast, children and adolescents with poor nutritional status are exposed to alterations of mental and/or behavioural functions that can be corrected, to a certain extent, by dietary measures.", "title": "" }, { "docid": "e82e4599a7734c9b0292a32f551dd411", "text": "Generating a text abstract from a set of documents remains a challenging task. The neural encoder-decoder framework has recently been exploited to summarize single documents, but its success can in part be attributed to the availability of large parallel data automatically acquired from the Web. In contrast, parallel data for multi-document summarization are scarce and costly to obtain. There is a pressing need to adapt an encoder-decoder model trained on single-document summarization data to work with multiple-document input. In this paper, we present an initial investigation into a novel adaptation method. It exploits the maximal marginal relevance method to select representative sentences from multi-document input, and leverages an abstractive encoder-decoder model to fuse disparate sentences to an abstractive summary. The adaptation method is robust and itself requires no training data. Our system compares favorably to state-of-the-art extractive and abstractive approaches judged by automatic metrics and human assessors.", "title": "" }, { "docid": "6bb9df7f37426563a373fae6dd46db66", "text": "Hyper-heuristics comprise a set of approaches that are motivated (at least in part) by the goal of automating the design of heuristic methods to solve hard computational search problems. An underlying strategic research challenge is to develop more generally applicable search methodologies. The term hyper-heuristic is relatively new; it was first used in 2000 to describe heuristics to choose heuristics in the context of combinatorial optimisation. 
However, the idea of automating the design of heuristics is not new; it can be traced back to the 1960s. The definition of hyper-heuristics has been recently extended to refer to a search method or learning mechanism for selecting or generating heuristics to solve computational search problems. Two main hyper-heuristic categories can be considered: heuristic selection and heuristic generation. The distinguishing feature of hyper-heuristics is that they operate on a search space of heuristics (or heuristic components) rather than directly on the search space of solutions to the underlying problem that is being addressed. This paper presents a critical discussion of the scientific literature on hyper-heuristics including their origin and intellectual roots, a detailed account of the main types of approaches, and an overview of some related areas. Current research trends and directions for future research are also discussed. Journal of the Operational Research Society advance online publication, 10 July 2013; doi:10.1057/jors.2013.71", "title": "" }, { "docid": "2e1385c5398196fbe9a108f241712c01", "text": "The concept of deliberate practice was introduced to explain exceptional performance in domains such as music and chess. We apply deliberate practice theory to intermediate-level performance in typing, an activity that many people pursue on a regular basis. Sixty university students with several years typing experience participated in laboratory sessions that involved the assessment of abilities, a semistructured interview on typing experience as well as various typing tasks. In line with traditional theories of skill acquisition, experience (amount of typing since introduction to the keyboard) was related to typing performance. A perceptual speed test (digit-symbol substitution) and a measure of motor abilities (tapping) were not significantly related to performance. In line with deliberate practice theory, the highest level of performance was reported among participants who had attended a typing class in the past and who reported to adopt the goal of typing quickly during everyday typing. Findings suggest that even after several years of experience engagement in an everyday activity can serve as an opportunity for further skill improvement if individuals are willing to push themselves.", "title": "" }, { "docid": "bd6f23972644f6239ab1a40e9b20aa1e", "text": "This paper presents a machine-learning software solution that performs a multi-dimensional prediction of QoE (Quality of Experience) based on network-related SIFs (System Influence Factors) as input data. The proposed solution is verified through experimental study based on video streaming emulation over LTE (Long Term Evolution) which allows the measurement of network-related SIF (i.e., delay, jitter, loss), and subjective assessment of MOS (Mean Opinion Score). Obtained results show good performance of proposed MOS predictor in terms of mean prediction error and thereby can serve as an encouragement to implement such solution in all-IP (Internet Protocol) real environment.", "title": "" }, { "docid": "4277894ef2bf88fd3a78063a8b0cc7fe", "text": "This paper deals with a design method of LCL filter for grid-connected three-phase PWM voltage source inverters (VSI). By analyzing the total harmonic distortion of the current (THDi) in the inverter-side inductor and the ripple attenuation factor of the current (RAF) injected to the grid through the LCL network, the parameter of LCL can be clearly designed. 
The described LCL filter design method is verified by showing a good agreement between the target current THD and the actual one through simulation and experiment.", "title": "" }, { "docid": "1e69c1aef1b194a27d150e45607abd5a", "text": "Methods of semantic relatedness are essential for wide range of tasks such as information retrieval and text mining. This paper, concerned with these automated methods, attempts to improve Gloss Vector semantic relatedness measure for more reliable estimation of relatedness between two input concepts. Generally, this measure by considering frequency cut-off for big rams tries to remove low and high frequency words which usually do not end up being significant features. However, this naive cutting approach can lead to loss of valuable information. By employing point wise mutual information (PMI) as a measure of association between features, we will try to enforce the foregoing elimination step in a statistical fashion. Applying both approaches to the biomedical domain, using MEDLINE as corpus, MeSH as thesaurus, and available reference standard of 311 concept pairs manually rated for semantic relatedness, we will show that PMI for removing insignificant features is more effective approach than frequency cut-off.", "title": "" }, { "docid": "ac6430e097fb5a7dc1f7864f283dcf47", "text": "In the task of Object Recognition, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose regression using these approaches has received relatively much less attention. In this paper we show how deep architectures, specifically Convolutional Neural Networks (CNN), can be adapted to the task of simultaneous categorization and pose estimation of objects. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations of CNNs represent object pose information and how this contradicts object category representations. We extensively experiment on two recent large and challenging multi-view datasets. Our models achieve better than state-of-the-art performance on both datasets.", "title": "" }, { "docid": "076c5e6d8d6822988c64cabf8e6d4289", "text": "This paper presents the design of a dual-polarized log.-periodic four arm antenna bent on a conical MID substrate. The bending of a planar structure in free space is highlighted and the resulting effects on the input impedance and radiation characteristic are analyzed. The subsequent design of the UWB compliant prototype is introduced. An adequate agreement between simulated and measured performance can be observed. The antenna provides an input matching of better than −8 dB over a frequency range from 3GHz to 9GHz. The antenna pattern is characterized by a radiation with two linear, orthogonal polarizations and a front-to-back ratio of 6 dB. A maximum gain of 5.6 dBi is achieved at 5.5GHz. The pattern correlation coefficients confirm the suitability of this structure for diversity and MIMO applications. The overall antenna diameter and height are 50mm and 24mm respectively. 
It could therefore be used as a surface mounted or ceiling antenna in buildings, vehicles or aircraft for communication systems.", "title": "" }, { "docid": "64330f538b3d8914cbfe37565ab0d648", "text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.", "title": "" }, { "docid": "132af507da095adf4f07ec0248d34cc2", "text": "This project designs an eight-bit division algorithm program using Xilinx ISE 10.1 software for simulation, with the algorithm circuit partitioned onto a hardware Field Programmable Gate Array (FPGA). The algorithm divides an 8-bit dividend by an 8-bit divisor as input and produces a 16-bit result as output. The eight-bit circuit-partitioning algorithm implements the division process using arithmetic and logic unit (ALU) operations. All of these operations are written in Verilog and their results are displayed on LEDs on the FPGA board. An FPGA is a semiconductor device containing programmable logic components called \"logic blocks\" and programmable interconnections. A logic block can be programmed to perform the functions of basic logic gates such as AND and XOR, or more complex combinations of functions such as decoders, or simple mathematical functions such as addition, subtraction, multiplication, and division (+, -, x, ÷). Finally, this project outlines the design and implementation of a new hardware divider for performing 8-bit division. The error probability function of this division algorithm is fully characterized and contrasted against existing hardware division algorithms.", "title": "" }, { "docid": "ca88e6aab6f65f04bfc7a7eb470a31e1", "text": "We construct protocols for secure multiparty computation with the help of a computationally powerful party, namely the “cloud”. Our protocols are simultaneously efficient in a number of metrics: • Rounds: our protocols run in 4 rounds in the semi-honest setting, and 5 rounds in the malicious setting. • Communication: the number of bits exchanged in an execution of the protocol is independent of the complexity of function f being computed, and depends only on the length of the inputs and outputs. • Computation: the computational complexity of all parties is independent of the complexity of the function f, whereas that of the cloud is linear in the size of the circuit computing f. In the semi-honest case, our protocol relies on the “ring learning with errors” (RLWE) assumption, whereas in the malicious case, security is shown under the Ring LWE assumption as well as the existence of simulation-extractable NIZK proof systems and succinct non-interactive arguments.
In the malicious setting, we also relax the communication and computation requirements above, and only require that they be “small” – polylogarithmic in the computation size and linear in the size of the joint size of the inputs. Our constructions leverage the key homomorphic property of the recent fully homomorphic encryption scheme of Brakerski and Vaikuntanathan (CRYPTO 2011, FOCS 2011). Namely, these schemes allow combining encryptions of messages under different keys to produce an encryption (of the sum of the messages) under the sum of the keys. We also design an efficient, non-interactive threshold decryption protocol for these fully homomorphic encryption schemes.", "title": "" }, { "docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522", "text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.", "title": "" }, { "docid": "fc1009e9515d83166e97e4e01ae9ca69", "text": "In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD) that has a total of more than 50000 gestures for the \"one-shot-learning\" competition. To increase the potential of the old dataset, we designed new well curated datasets composed of 249 gesture labels, and including 47933 gestures manually labeled the begin and end frames in sequences. Using these datasets we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for \"user independent\" gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures while the second one is designed for gesture classification from segmented data.
The baseline method based on the bag of visual words model is also presented.", "title": "" }, { "docid": "3e8535bc48ce88ba6103a68dd3ad1d5d", "text": "This letter reports the concept and design of the active-braid, a novel bioinspired continuum manipulator with the ability to contract, extend, and bend in three-dimensional space with varying stiffness. The manipulator utilizes a flexible crossed-link helical array structure as its main supporting body, which is deformed by using two radial actuators and a total of six longitudinal tendons, analogously to the three major types of muscle layers found in muscular hydrostats. The helical array structure ensures that the manipulator behaves similarly to a constant volume structure (expanding while shortening and contracting while elongating). Numerical simulations and experimental prototypes are used in order to evaluate the feasibility of the concept.", "title": "" }, { "docid": "6dbe972f08097355b32685c5793f853a", "text": "BACKGROUND/AIMS\nRheumatoid arthritis (RA) is a serious health problem resulting in significant morbidity and disability. Tai Chi may be beneficial to patients with RA as a result of effects on muscle strength and 'mind-body' interactions. To obtain preliminary data on the effects of Tai Chi on RA, we conducted a pilot randomized controlled trial. Twenty patients with functional class I or II RA were randomly assigned to Tai Chi or attention control in twice-weekly sessions for 12 weeks. The American College of Rheumatology (ACR) 20 response criterion, functional capacity, health-related quality of life and the depression index were assessed.\n\n\nRESULTS\nAt 12 weeks, 5/10 patients (50%) randomized to Tai Chi achieved an ACR 20% response compared with 0/10 (0%) in the control (p = 0.03). Tai Chi had greater improvement in the disability index (p = 0.01), vitality subscale of the Medical Outcome Study Short Form 36 (p = 0.01) and the depression index (p = 0.003). Similar trends to improvement were also observed for disease activity, functional capacity and health-related quality of life. No adverse events were observed and no patients withdrew from the study.\n\n\nCONCLUSION\nTai Chi appears safe and may be beneficial for functional class I or II RA. These promising results warrant further investigation into the potential complementary role of Tai Chi for treatment of RA.", "title": "" }, { "docid": "9b3adcf557ce2d3f6b3cb717694f9596", "text": "BACKGROUND\nVariation in physician adoption of new medications is poorly understood. Traditional approaches (eg, measuring time to first prescription) may mask substantial heterogeneity in technology adoption.\n\n\nOBJECTIVE\nApply group-based trajectory models to examine the physician adoption of dabigratran, a novel anticoagulant.\n\n\nMETHODS\nA retrospective cohort study using prescribing data from IMS Xponent™ on all Pennsylvania physicians regularly prescribing anticoagulants (n=3911) and data on their characteristics from the American Medical Association Masterfile. We examined time to first dabigatran prescription and group-based trajectory models to identify adoption trajectories in the first 15 months. 
Factors associated with rapid adoption were examined using multivariate logistic regressions.\n\n\nOUTCOMES\nTrajectories of monthly share of oral anticoagulant prescriptions for dabigatran.\n\n\nRESULTS\nWe identified 5 distinct adoption trajectories: 3.7% rapidly and extensively adopted dabigatran (adopting in ≤3 mo with 45% of prescriptions) and 13.4% were rapid and moderate adopters (≤3 mo with 20% share). Two groups accounting for 21.6% and 16.1% of physicians, respectively, were slower to adopt (6-10 mo post-introduction) and dabigatran accounted for <10% share. Nearly half (45.2%) of anticoagulant prescribers did not adopt dabigatran. Cardiologists were much more likely than primary care physicians to rapidly adopt [odds ratio (OR)=12.2; 95% confidence interval (CI), 9.27-16.1] as were younger prescribers (age 36-45 y: OR=1.49, 95% CI, 1.13-1.95; age 46-55: OR=1.34, 95% CI, 1.07-1.69 vs. >55 y).\n\n\nCONCLUSIONS\nTrajectories of physician adoption of dabigatran were highly variable with significant differences across specialties. Heterogeneity in physician adoption has potential implications for the cost and effectiveness of treatment.", "title": "" }, { "docid": "91f390e8ea6c931dff1e1d171cede590", "text": "Deep neural networks are state of the art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network.", "title": "" }, { "docid": "7700935aeb818b8c863747c0624764db", "text": "The Internal Model Control (IMC) is a transparent framework for designing and tuning the controller. The proportional-integral (PI) and proportional-integral derivative (PID) controllers have ability to meet most of the control objectives and this led to their widespread acceptance in the control industry. In this paper the IMC-based PID controller is designed. IMC-based PID tuning method is a trade-off between closed-loop performance and robustness to model inaccuracies achieved with a single tuning parameter λ. The IMC-PID controller shows good set-point tracking property. In this paper, Robust stability synthesis of a class of uncertain parameter varying firstorder time-delay systems is presented. The output response characteristics using IMC based PID controller along with characteristics using automatic PID tuner are compared. 
The performance of the IMC-based PID controller for stable and unstable processes, as well as for processes with time delay, is studied and discussed. Various order-reduction techniques are utilized to reduce a higher-order polynomial into a lower-order transfer function. This paper presents results of the implementation of an Internal Model Control (IMC) based PID controller for the level control application to meet robust performance and to achieve set-point tracking and disturbance rejection.", "title": "" }, { "docid": "b53b1bf8c9cd562ee3bf32324d7ceae3", "text": "In this paper we present our results on using electromyographic (EMG) sensor arrays for finger gesture recognition. Sensing muscle activity allows capturing finger motion without placing sensors directly at the hand or fingers and thus may be used to build unobtrusive body-worn interfaces. We use an electrode array with 192 electrodes to record a high-density EMG of the upper forearm muscles. We present in detail a baseline system for gesture recognition on our dataset, using a naive Bayes classifier to discriminate the 27 gestures. We recorded 25 sessions from 5 subjects. We report an average accuracy of 90% for the within-session scenario, showing the feasibility of the EMG approach to discriminate a large number of subtle gestures. We analyze the effect of the number of used electrodes on the recognition performance and show the benefit of using high numbers of electrodes. Cross-session recognition typically suffers from electrode position changes from session to session. We present two methods to estimate the electrode shift between sessions based on a small amount of calibration data and compare it to a baseline system with no shift compensation. The presented methods raise the accuracy from a 59% baseline to 75% after shift compensation. The dataset is publicly available.", "title": "" } ]
scidocsrr
d4ae2ecbedc5d4f4ad132ea12c164a88
THE SELFIE PHENOMENON : THE IDEA OF SELF-PRESENTATION AND ITS IMPLICATIONS AMONG YOUNG WOMEN A
[ { "docid": "157a96adf7909134a14f8abcc7a2655c", "text": "Social networking sites like MySpace, Facebook, and StudiVZ are popular means of communicating personality. Recent theoretical and empirical considerations of homepages and Web 2.0 platforms show that impression management is a major motive for actively participating in social networking sites. However, the factors that determine the specific form of self-presentation and the extent of self-disclosure on the Internet have not been analyzed. In an exploratory study, we investigated the relationship between self-reported (offline) personality traits and (online) self-presentation in social networking profiles. A survey among 58 users of the German Web 2.0 site, StudiVZ.net, and a content analysis of the respondents’ profiles showed that self-efficacy with regard to impression management is strongly related to the number of virtual friends, the level of profile detail, and the style of the personal photo. The results also indicate a slight influence of extraversion, whereas there was no significant effect for self-esteem.", "title": "" } ]
[ { "docid": "66b154f935e66a78895e17318921f36a", "text": "Metaheuristic algorithms have been a very important topic in computer science since the start of evolutionary computing the Genetic Algorithms 1950s. By now these metaheuristic algorithms have become a very large family with successful applications in industry. A challenge which is always pondered on, is finding the suitable metaheuristic algorithm for a certain problem. The choice sometimes may have to be made after trying through many experiments or by the experiences of human experts. As each of the algorithms have their own strengths in solving different kinds of problems, in this paper we propose a framework of metaheuristic brick-up system. The flexibility of brick-up (like Lego) offers users to pick a collection of fundamental functions of metaheuristic algorithms that were known to perform well in the past. In order to verify this brickup concept, in this paper we propose to use the Monte Carlo method with upper confidence bounds applied to a decision tree in selecting appropriate functional pieces. This paper validates the basic concept and discusses the further works.", "title": "" }, { "docid": "890b1ed209b3e34c5b460dce310ee08f", "text": "INTRODUCTION\nThe adequate use of compression in venous leg ulcer treatment is equally important to patients as well as clinicians. Currently, there is a lack of clarity on contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients.\n\n\nMETHODS\nThe project aimed to optimize prevention, treatment and maintenance approaches by recognizing contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients. A literature review was conducted of current guidelines on venous leg ulcer prevention, management and maintenance.\n\n\nRESULTS\nSearches took place from 29th February 2016 to 30th April 2016 and were prospectively limited to publications in the English and German languages and publication dates were between January 2009 and April 2016. Twenty Guidelines, clinical pathways and consensus papers on compression therapy for venous leg ulcer treatment and for venous disease, were included. Guidelines agreed on the following absolute contraindications: Arterial occlusive disease, heart failure and ankle brachial pressure index (ABPI) <0.5, but gave conflicting recommendations on relative contraindications, risks and adverse events. Moreover definitions were unclear and not consistent.\n\n\nCONCLUSIONS\nEvidence-based guidance is needed to inform clinicians on risk factor, adverse effects, complications and contraindications. ABPI values need to be specified and details should be given on the type of compression that is safe to use. Ongoing research challenges the present recommendations, shifting some contraindications into a list of potential indications. Complications of compression can be prevented when adequate assessment is performed and clinicians are skilled in applying compression.", "title": "" }, { "docid": "4129881d5ff6f510f6deb23fd5b29afa", "text": "Childbirth is an intricate process which is marked by an increased cervical dilation rate caused due to steady increments in the frequency and strength of uterine contractions. The contractions may be characterized by its strength, duration and frequency (count) - which are monitored through Tocography. 
However, the procedure is prone to subjectivity and an automated approach for the classification of the contractions is needed. In this paper, we use three different Weighted K-Nearest Neighbor classifiers and Decision Trees to classify the contractions into three types: Mild, Moderate and Strong. Further, we note the fact that our training data consists of fewer samples of Contractions as compared to those of Non-contractions - resulting in “Class Imbalance”. Hence, we use the Synthetic Minority Oversampling Technique (SMOTE) in conjunction with the K-NN classifier and Decision Trees to alleviate the problems of the same. The ground truth for Tocography signals was established by a doctor having an experience of 36 years in Obstetrics and Gynaecology. The annotations are in three categories: Mild (33 samples), Moderate (64 samples) and Strong (96 samples), amounting to a total of 193 contractions whereas the number of Non-contraction samples was 1217. Decision Trees using SMOTE performed the best with accuracies of 95%, 98.25% and 100% for the aforementioned categories, respectively. The sensitivities achieved for the same are 96.67%, 96.52% and 100% whereas the specificities amount to 93.33%, 100% and 100%, respectively. Our method may be used to monitor the labour progress efficiently.", "title": "" }, { "docid": "cf3048e512d5d4eab62eef01627fe8d7", "text": "In this paper, we present simulation results and analysis of 3-D magnetic flux leakage (MFL) signals due to the occurrence of a surface-breaking defect in a ferromagnetic specimen. The simulations and analysis are based on a magnetic dipole-based analytical model, presented in a previous paper. We exploit the tractability of the model and its amenability to simulation to analyze properties of the model as well as of the MFL fields it predicts, such as scale-invariance, effect of lift-off and defect shape, the utility of the tangential MFL component, and the sensitivity of MFL fields to parameters. The simulations and analysis show that the tangential MFL component is indeed a potentially critical part of MFL testing. It is also shown that the MFL field of a defect varies drastically with lift-off. We also exploit the model to develop a lift-off compensation technique which enables the prediction of the size of the defect for a range of lift-off values.", "title": "" }, { "docid": "bfe8e4093219080ef7c377a67184ff00", "text": "A clothoid has the property that its curvature varies linearly with arclength. This is a useful feature for the path of a vehicle whose turning radius is controlled as a linear function of the distance travelled. Highways, railways and the paths of car-like robots may be composed of straight line segments, clothoid segments and circular arcs. Control polylines are used in computer aided design and computer aided geometric design applications to guide composite curves during the design phase. This article examines the use of a control polyline to guide a curve composed of segments of clothoids, straight lines, and circular arcs. r 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3f48327ca2125df3a6da0c1e54131013", "text": "Background: We investigated the value of magnetic resonance imaging (MRI) in the evaluation of sex-reassignment surgery in male-to-female transsexual patients. Methods: Ten male-to-female transsexual patients who underwent sex-reassignment surgery with inversion of combined penile and scrotal skin flaps for vaginoplasty were examined after surgery with MRI. 
Turbo spin-echo T2-weighted and spin-echo T1-weighted images were obtained in sagittal, coronal, and axial planes with a 1.5-T superconductive magnet. Images were acquired with and without an inflatable silicon vaginal tutor. The following parameters were evaluated: neovaginal depth, neovaginal inclination in the sagittal plane, presence of remnants of the corpus spongiosum and corpora cavernosa, and thickness of the rectovaginal septum. Results: The average neovaginal depth was 7.9 cm (range = 5–10 cm). The neovagina had a correct oblique inclination in the sagittal plane in four patients, no inclination in five, and an incorrect inclination in one. In seven patients, MRI showed remnants of the corpora cavernosa and/or of the corpus spongiosum; in three patients, no remnants were detected. The average thickness of the rectovaginal septum was 4 mm (range = 3–6 mm). Conclusion: MRI allows a detailed assessment of the pelvic anatomy after genital reconfiguration and provides information that can help the surgeon to adopt the most correct surgical approach.", "title": "" }, { "docid": "c3b691cd3671011278ecd30563b27245", "text": "We formalize weighted dependency parsing as searching for maximum spanning trees (MSTs) in directed graphs. Using this representation, the parsing algorithm of Eisner (1996) is sufficient for searching over all projective trees in O(n3) time. More surprisingly, the representation is extended naturally to non-projective parsing using Chu-Liu-Edmonds (Chu and Liu, 1965; Edmonds, 1967) MST algorithm, yielding anO(n2) parsing algorithm. We evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005) and show that MST parsing increases efficiency and accuracy for languages with non-projective dependencies.", "title": "" }, { "docid": "2b3929da96949056bc473e8da947cebe", "text": "This paper presents “Value-Difference Based Exploration” (VDBE), a method for balancing the exploration/exploitation dilemma inherent to reinforcement learning. The proposed method adapts the exploration parameter of ε-greedy in dependence of the temporal-difference error observed from value-function backups, which is considered as a measure of the agent’s uncertainty about the environment. VDBE is evaluated on a multi-armed bandit task, which allows for insight into the behavior of the method. Preliminary results indicate that VDBE seems to be more parameter robust than commonly used ad hoc approaches such as ε-greedy or softmax.", "title": "" }, { "docid": "8bbe111daad27eba937699e87e195ee5", "text": "The global impact of Alzheimer’s disease (AD) continues to increase, and focused efforts are needed to address this immense public health challenge. National leaders have set a goal to prevent or effectively treat AD by 2025. In this paper, we discuss the path to 2025, and what is feasible in this time frame given the realities and challenges of AD drug development, with a focus on disease-modifying therapies (DMTs). Under the current conditions, only drugs currently in late Phase 1 or later will have a chance of being approved by 2025. If pipeline attrition rates remain high, only a few compounds at best will meet this time frame. 
There is an opportunity to reduce the time and risk of AD drug development through an improvement in trial design; better trial infrastructure; disease registries of well-characterized participant cohorts to help with more rapid enrollment of appropriate study populations; validated biomarkers to better detect disease, determine risk and monitor disease progression as well as predict disease response; more sensitive clinical assessment tools; and faster regulatory review. To implement change requires efforts to build awareness, educate and foster engagement; increase funding for both basic and clinical research; reduce fragmented environments and systems; increase learning from successes and failures; promote data standardization and increase wider data sharing; understand AD at the basic biology level; and rapidly translate new knowledge into clinical development. Improved mechanistic understanding of disease onset and progression is central to more efficient AD drug development and will lead to improved therapeutic approaches and targets. The opportunity for more than a few new therapies by 2025 is small. Accelerating research and clinical development efforts and bringing DMTs to market sooner would have a significant impact on the future societal burden of AD. As these steps are put in place and plans come to fruition, e.g., approval of a DMT, it can be predicted that momentum will build, the process will be self-sustaining, and the path to 2025, and beyond, becomes clearer.", "title": "" }, { "docid": "4566a0adb9496f765eebe1dd3afb08e9", "text": "According to medical reports, cancers are big problems in the world society. In this paper we are supposed to predict breast cancer recurrence by multi-layer perceptron with two different outputs, a deep neural network as a feature extraction and multi-layer perceptron as a classifier, rough neural network with two different outputs, and finally, support vector machine. Then, we compare the results achieved by each method. It can be understood that rough neural network with two outputs leads to the highest accuracy and the lowest variance among other structures.", "title": "" }, { "docid": "1a38695797b921e35e0987eeed11c95d", "text": "We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded in data in this way may be easier to learn, generalize better, and be less dependent on accurate prior models than, for example, POMDP state representations. Building on prior work by Jaeger and by Rivest and Schapire, in this paper we compare and contrast a linear specialization of the predictive approach with the state representations used in POMDPs and in k-order Markov models. Ours is the first specific formulation of the predictive idea that includes both stochasticity and actions (controls). We show that any system has a linear predictive state representation with number of predictions no greater than the number of states in its minimal POMDP model. In predicting or controlling a sequence of observations, the concepts of state and state estimation inevitably arise. There have been two dominant approaches. The generative-model approach, typified by research on partially observable Markov decision processes (POMDPs), hypothesizes a structure for generating observations and estimates its state and state dynamics. 
The history-based approach, typified by k-order Markov methods, uses simple functions of past observations as state, that is, as the immediate basis for prediction and control. (The data flow in these two approaches is diagrammed in Figure 1.) Of the two, the generative-model approach is more general. The model's internal state gives it temporally unlimited memory--the ability to remember an event that happened arbitrarily long ago--whereas a history-based approach can only remember as far back as its history extends. The bane of generative-model approaches is that they are often strongly dependent on a good model of the system's dynamics. Most uses of POMDPs, for example, assume a perfect dynamics model and attempt only to estimate state. There are algorithms for simultaneously estimating state and dynamics (e.g., Chrisman, 1992), analogous to the Baum-Welch algorithm for the uncontrolled case (Baum et al., 1970), but these are only effective at tuning parameters that are already approximately correct (e.g., Shatkay & Kaelbling, 1997).", "title": "" }, { "docid": "5ee490a307a0b6108701225170690386", "text": "An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to maintenance. Moreover, the aging curves were influenced by ink composition, as well as storage conditions (particularly when the samples were not stored in \"normal\" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity.", "title": "" }, { "docid": "923b4025d22bc146c53fb4c90f43ef72", "text": "In this paper we describe preliminary approaches for content-based recommendation of Pinterest boards to users. We describe our representation and features for Pinterest boards and users, together with a supervised recommendation model. We observe that features based on latent topics lead to better performance than features based on user-assigned Pinterest categories. We also find that using social signals (repins, likes, etc.) can improve recommendation quality.", "title": "" }, { "docid": "de6581719d2bc451695a77d43b091326", "text": "Keyphrases are useful for a variety of tasks in information retrieval systems and natural language processing, such as text summarization, automatic indexing, clustering/classification, ontology learning and building and conceptualizing particular knowledge domains, etc. However, assigning these keyphrases manually is time consuming and expensive in terms of human resources. Therefore, there is a need to automate the task of extracting keyphrases. A wide range of keyphrase extraction techniques have been proposed, but they still suffer from low accuracy rates and poor performance.
This paper presents a state of the art of automatic keyphrase extraction approaches to identify their strengths and weaknesses. We also discuss why some techniques perform better than others and how can we improve the task of automatic keyphrase extraction.", "title": "" }, { "docid": "60a3538ec6a64af6f8fd447ed0fb79f5", "text": "Several Pinned Photodiode (PPD) CMOS Image Sensors (CIS) are designed, manufactured, characterized and exposed biased to ionizing radiation up to 10 kGy(SiO2 ). In addition to the usually reported dark current increase and quantum efficiency drop at short wavelengths, several original radiation effects are shown: an increase of the pinning voltage, a decrease of the buried photodiode full well capacity, a large change in charge transfer efficiency, the creation of a large number of Total Ionizing Dose (TID) induced Dark Current Random Telegraph Signal (DC-RTS) centers active in the photodiode (even when the Transfer Gate (TG) is accumulated) and the complete depletion of the Pre-Metal Dielectric (PMD) interface at the highest TID leading to a large dark current and the loss of control of the TG on the dark current. The proposed mechanisms at the origin of these degradations are discussed. It is also demonstrated that biasing (i.e., operating) the PPD CIS during irradiation does not enhance the degradations compared to sensors grounded during irradiation.", "title": "" }, { "docid": "fb729bf4edf25f082a4808bd6bb0961d", "text": "The paper reports some of the reasons behind the low use of Information and Communication Technology (ICT) by teachers. The paper has reviewed a number or studies from different parts of the world and paid greater attention to Saudi Arabia. The literature reveals a number of factors that hinder teachers’ use of ICT. This paper will focus on lack of access to technology, lack of training and lack of time.", "title": "" }, { "docid": "c23dc5fdb8c2d3b7314d895bbcb13832", "text": "Wireless power transfer (WPT) is a promising new solution to provide convenient and perpetual energy supplies to wireless networks. In practice, WPT is implementable by various technologies such as inductive coupling, magnetic resonate coupling, and electromagnetic (EM) radiation, for short-/mid-/long-range applications, respectively. In this paper, we consider the EM or radio signal enabled WPT in particular. Since radio signals can carry energy as well as information at the same time, a unified study on simultaneous wireless information and power transfer (SWIPT) is pursued. Specifically, this paper studies a multiple-input multiple-output (MIMO) wireless broadcast system consisting of three nodes, where one receiver harvests energy and another receiver decodes information separately from the signals sent by a common transmitter, and all the transmitter and receivers may be equipped with multiple antennas. Two scenarios are examined, in which the information receiver and energy receiver are separated and see different MIMO channels from the transmitter, or co-located and see the identical MIMO channel from the transmitter. For the case of separated receivers, we derive the optimal transmission strategy to achieve different tradeoffs for maximal information rate versus energy transfer, which are characterized by the boundary of a so-called rate-energy (R-E) region. 
For the case of co-located receivers, we show an outer bound for the achievable R-E region due to the potential limitation that practical energy harvesting receivers are not yet able to decode information directly. Under this constraint, we investigate two practical designs for the co-located receiver case, namely time switching and power splitting, and characterize their achievable R-E regions in comparison to the outer bound.", "title": "" }, { "docid": "960c37997d6138f8fd58728a1f976c9e", "text": "Hundreds of highly conserved distal cis-regulatory elements have been characterized so far in vertebrate genomes. Many thousands more are predicted on the basis of comparative genomics. However, in stark contrast to the genes that they regulate, in invertebrates virtually none of these regions can be traced by using sequence similarity, leaving their evolutionary origins obscure. Here we show that a class of conserved, primarily non-coding regions in tetrapods originated from a previously unknown short interspersed repetitive element (SINE) retroposon family that was active in the Sarcopterygii (lobe-finned fishes and terrestrial vertebrates) in the Silurian period at least 410 million years ago (ref. 4), and seems to be recently active in the ‘living fossil’ Indonesian coelacanth, Latimeria menadoensis. Using a mouse enhancer assay we show that one copy, 0.5 million bases from the neuro-developmental gene ISL1, is an enhancer that recapitulates multiple aspects of Isl1 expression patterns. Several other copies represent new, possibly regulatory, alternatively spliced exons in the middle of pre-existing Sarcopterygian genes. One of these, a more than 200-base-pair ultraconserved region, 100% identical in mammals, and 80% identical to the coelacanth SINE, contains a 31-amino-acid-residue alternatively spliced exon of the messenger RNA processing gene PCBP2 (ref. 6). These add to a growing list of examples in which relics of transposable elements have acquired a function that serves their host, a process termed ‘exaptation’, and provide an origin for at least some of the many highly conserved vertebrate-specific genomic sequences.", "title": "" }, { "docid": "103e3212f2d1302c7a901be0d3f46e31", "text": "This article explores dominant discourses surrounding male and female genital cutting. Over a similar period of time, these genital operations have separately been subjected to scrutiny and criticism. However, although critiques of female circumcision have been widely taken up, general public opinion toward male circumcision remains indifferent. This difference cannot merely be explained by the natural attributes and effects of these practices. Rather, attitudes toward genital cutting reflect historically and culturally specific understandings of the human body. In particular, I suggest that certain problematic understandings of male and female sexuality are deeply implicated in the dominant Western discourses on genital surgery.", "title": "" }, { "docid": "3c577fcd0d0876af4aa031affa3bd168", "text": "Domain-specific Internet of Things (IoT) applications are becoming more and more popular. Each of these applications uses their own technologies and terms to describe sensors and their measurements. This is a difficult task to help users build generic IoT applications to combine several domains. To explicitly describe sensor measurements in uniform way, we propose to enrich them with semantic web technologies. 
Domain knowledge is already defined in more than 200 ontology and sensor-based projects that we could reuse to build cross-domain IoT applications. There is a huge gap in reasoning over sensor measurements without a common nomenclature and best practices to ease the automation of generic IoT applications. We present our Machine-to-Machine Measurement (M3) framework and share lessons learned to improve existing standards such as oneM2M, ETSI M2M, W3C Web of Things and W3C Semantic Sensor Network.", "title": "" } ]
scidocsrr
e7c763e74e6cdc7271a89e4f56d5b5e2
Toward Natural Language Generation by Humans
[ { "docid": "7f110e4769b996de13afe63962bcf2d2", "text": "Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection.", "title": "" } ]
[ { "docid": "87133250a9e04fd42f5da5ecacd39d70", "text": "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.", "title": "" }, { "docid": "8f660dd12e7936a556322f248a9e2a2a", "text": "We develop and apply statistical topic models to software as a means of extracting concepts from source code. The effectiveness of the technique is demonstrated on 1,555 projects from SourceForge and Apache consisting of 113,000 files and 19 million lines of code. In addition to providing an automated, unsupervised, solution to the problem of summarizing program functionality, the approach provides a probabilistic framework with which to analyze and visualize source file similarity. Finally, we introduce an information-theoretic approach for computing tangling and scattering of extracted concepts, and present preliminary results", "title": "" }, { "docid": "875548b7dc303bef8efa8284216e010d", "text": "BACKGROUND\nGigantomastia is a breast disorder marked by exaggerated rapid growth of the breasts, generally bilaterally. Since this disorder is very rare and has been reported only in sparse case reports its etiology has yet to be fully established. Treatment is aimed at improving the clinical and psychological symptoms and reducing the treatment side effects; however, the best therapeutic option varies from case to case.\n\n\nCASE PRESENTATION\nThe present report described a case of gestational gigantomastia in a 30-year-old woman, gravida 2, parity 1, 17 week pregnant admitted to Pars Hospital, Tehran, Iran, on May 2014. The patient was admitted to hospital at week 17 of pregnancy, although her breasts initially had begun to enlarge from the first trimester. The patient developed hypercalcemia in her 32nd week of pregnancy. 
The present report followed this patient from diagnosis until the completion of treatment.\n\n\nCONCLUSION\nAlthough gestational gigantomastia is a rare condition, its timely prognosis and careful examination of some conditions like hyperprolactinemia and hypercalcemia is essential in successful management of this condition.", "title": "" }, { "docid": "d4a00f7d40f0d1f25869d7b15977b7df", "text": "We are living in the era of science and technology and it have infused with many aspects of our everyday life. With the advent of newer technologies the criminals have made full use of it which sometimes facade a challenging task to investigators such as forensic experts to catch the crime. This paper will discuss the need for computer forensics and application of technologies to be practiced in an effective and legal way, formalize basic technical issues, and point to references for further reading. It promotes the idea that the proficient practice of computer forensics and awareness of applicable laws is essential for today's networked organizations.", "title": "" }, { "docid": "6573162f8feacae5f121f69780534527", "text": "Larger fields in the Middle-size league as well as the effort to build mixed teams from different universities require a simulation environment which is capable to physically correctly simulate the robots and the environment. A standardized simulation environment has not yet been proposed for this league. In this paper we present our simulation environment, which is based on the Gazebo system. We show how typical Middle-size robots with features like omni-drives and omni-directional cameras can be modeled with relative ease. In particular, the control software for the real robots can be used with few changes, thus facilitating the transfer of results obtained in simulation back to the robots. We address some technical issues such as adapting time-triggered events in the robot control software to the simulation, and we introduce the concept of multi-level abstractions. The latter allows switching between faithful but computionally expensive sensor models and abstract but cheap approximations. These abstractions are needed especially when simulating whole teams of robots.", "title": "" }, { "docid": "cf5a4207d349bd7034112266e3a8684f", "text": "Smart grid uses intelligent transmission and distribution networks to deliver electricity. It aims to improve the electric system's reliability, security, and efficiency through two-way communication of consumption data and dynamic optimization of electric-system operations, maintenance, and planning. The smart grid systems use fine-grained power grid measurements to provide increased grid stability and reliability. Key to achieving this is securely sharing the measurements among grid entities over wide area networks. Typically, such sharing follows policies that depend on data generator and consumer preferences and on time-sensitive contexts. In smart grid, as well as the data, policies for sharing the data may be sensitive because they directly contain sensitive information, and reveal information about underlying data protected by the policy, or about the data owner or recipients. In this study, we propose an attribute-based data sharing scheme in smart grid. Not only the data but also the access policies are obfuscated in grid operators' point of view during the data sharing process. Thus, the data privacy and policy privacy are preserved in the proposed scheme. The access policy can be expressed with any arbitrary access formula. 
Thus, the expressiveness of the policy is enhanced. The security is also improved such that the unauthorized key generation center or the grid manage systems that store the data cannot decrypt the data to be shared. The computation overhead of recipients are also reduced by delegating most of the laborious decryption operations to the more powerful grid manage systems.", "title": "" }, { "docid": "764d6f45cd9dc08963a0e4d21b23d470", "text": "Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA architecture is based on the LIDA cognitive cycle, a sort of “cognitive atom.” The more elementary cognitive modules and processes play a role in each cognitive cycle. Higher-level processes are performed over multiple cycles. In addition to giving a quick overview of the LIDA conceptual model, and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. Finally, lessons For AGI researchers drawn from the model and its architecture are discussed.", "title": "" }, { "docid": "a324180129b78d853c035c2477f54a30", "text": "A book aiming to build a bridge between two fields that share the subject of research but do not share the same views necessarily puts itself in a difficult position: The authors have either to strike a fair balance at peril of dissatisfying both sides or nail their colors to the mast and cater mainly to one of two communities. For semantic processing of natural language with either NLP methods or Semantic Web approaches, the authors clearly favor the latter and propose a strictly ontology-driven interpretation of natural language. The main contribution of the book, driving semantic processing from the ground up by a formal domain-specific ontology, is elaborated in ten well-structured chapters spanning 143 pages of content.", "title": "" }, { "docid": "9adbbfb73f27d266f0ac975c784c22f1", "text": "Estonia has one of the most established e-voting systems in the world. Internet voting remote e-voting using the voter’s own equipment was piloted in 2005 [12] (with the first real elections using e-voting being conducted the same year) and has been in use ever since. So far, the Estonian internet voting system has been used for the whole country in three sets of local elections, two European Parliament elections and three parliamentary elections [5]. This chapter begins by exploring the voting system in Estonia; we consider the organisation of the electoral system in the three main kinds of election (municipal, parliamentary and European Parliament), the traditional ways of voting and the methods used to tally votes and elect candidates. Next we investigate the Estonian national ID card, an identity document that plays a key part in enabling internet voting to be possible in Estonia. After considering these pre-requisites, we describe the current internet voting system, including how it has evolved over time and the relatively new verification mechanisms that are available to voters. 
Next we discuss the assumptions and choices that have been made in the design of this system and the challenges and criticism that it has received. Finally, we conclude by discussing how the system has performed over the 10 years it has been in use, and the impact it appears to have had on voter turnout and satisfaction.", "title": "" }, { "docid": "c45d911aea9d06208a4ef273c9ab5ff3", "text": "A wide range of research has used face data to estimate a person's engagement, in applications from advertising to student learning. An interesting and important question not addressed in prior work is if face-based models of engagement are generalizable and context-free, or do engagement models depend on context and task. This research shows that context-sensitive face-based engagement models are more accurate, at least in the space of web-based tools for trauma recovery. Estimating engagement is important as various psychological studies indicate that engagement is a key component to measure the effectiveness of treatment and can be predictive of behavioral outcomes in many applications. In this paper, we analyze user engagement in a trauma-recovery regime during two separate modules/tasks: relaxation and triggers. The dataset comprises of 8M+ frames from multiple videos collected from 110 subjects, with engagement data coming from 800+ subject self-reports. We build an engagement prediction model as sequence learning from facial Action Units (AUs) using Long Short Term Memory (LSTMs). Our experiments demonstrate that engagement prediction is contextual and depends significantly on the allocated task. Models trained to predict engagement on one task are only weak predictors for another and are much less accurate than context-specific models. Further, we show the interplay of subject mood and engagement using a very short version of Profile of Mood States (POMS) to extend our LSTM model.", "title": "" }, { "docid": "43628e18a38d6cc9134fcf598eae6700", "text": "Purchase of dietary supplement products is increasing despite the lack of clinical evidence to support health needs for consumption. The purpose of this crosssectional study is to examine the factors influencing consumer purchase intention of dietary supplement products in Penang based on Theory of Planned Behaviour (TPB). 367 consumers were recruited from chain pharmacies and hypermarkets in Penang. From statistical analysis, the role of attitude differs from the original TPB model; attitude played a new role as the mediator in this dietary supplement products context. Findings concluded that subjective norms, importance of price and health consciousness affected dietary supplement products purchase intention indirectly through attitude formation, with 71.5% of the variance explained. Besides, significant differences were observed between dietary supplement products users and non-users in all variables. Dietary supplement product users have stronger intention to purchase dietary supplement products, more positive attitude, with stronger perceived social pressures to purchase, perceived more availability, place more importance of price and have higher level of health consciousness compared to nonusers. Therefore, in order to promote healthy living through natural ways, consumers’ attitude formation towards dietary supplement products should be the main focus. 
Policy maker, healthcare providers, educators, researchers and dietary supplement industry must be responsible and continue to work diligently to provide consumers with accurate dietary supplement products and healthy living information.", "title": "" }, { "docid": "a1ebca14dcf943116b2808b9d954f6f4", "text": "In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-arts for human parsing. In particular, the F1-score reaches 64.38 percent by our ATR framework, significantly higher than 44.76 percent based on the state-of-the-art algorithm [28].", "title": "" }, { "docid": "3a18976245cfc4b50e97aadf304ef913", "text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.", "title": "" }, { "docid": "c84a0f630b4fb2e547451d904e1c63a5", "text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. 
We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.", "title": "" }, { "docid": "322d23354a9bf45146e4cb7c733bf2ec", "text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: brais.martinez@nottingham.ac.uk Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: michel.valstar@nottingham.ac.uk", "title": "" }, { "docid": "7cfc2866218223ba6bd56eb1f10ce29f", "text": "This paper deals with prediction of anopheles number, the main vector of malaria risk, using environmental and climate variables. The variables selection is based on an automatic machine learning method using regression trees, and random forests combined with stratified two levels cross validation. The minimum threshold of variables importance is accessed using the quadratic distance of variables importance while the optimal subset of selected variables is used to perform predictions. 
Finally the results revealed to be qualitatively better, at the selection, the prediction, and the CPU time point of view than those obtained by GLM-Lasso method.", "title": "" }, { "docid": "354bc052f75e7884baca157492f5004c", "text": "This paper is about how the SP theory of intelligence and its realization in the SP machine may, with advantage, be applied to the management and analysis of big data. The SP system-introduced in this paper and fully described elsewhere-may help to overcome the problem of variety in big data; it has potential as a universal framework for the representation and processing of diverse kinds of knowledge, helping to reduce the diversity of formalisms and formats for knowledge, and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central in the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualization of knowledge structures and inferential processes. A high-parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it.", "title": "" }, { "docid": "e32db5353519de574b70e33a3498b695", "text": "Reinforcement Learning (RL) provides a promising new approach to systems performance management that differs radically from standard queuing-theoretic approaches making use of explicit system performance models. In principle, RL can automatically learn high-quality management policies without an explicit performance model or traffic model and with little or no built-in system specific knowledge. In our original work [1], [2], [3] we showed the feasibility of using online RL to learn resource valuation estimates (in lookup table form) which can be used to make high-quality server allocation decisions in a multi-application prototype Data Center scenario. The present work shows how to combine the strengths of both RL and queuing models in a hybrid approach in which RL trains offline on data collected while a queuing model policy controls the system. By training offline we avoid suffering potentially poor performance in live online training. We also now use RL to train nonlinear function approximators (e.g. multi-layer perceptrons) instead of lookup tables; this enables scaling to substantially larger state spaces. Our results now show that in both open-loop and closed-loop traffic, hybrid RL training can achieve significant performance improvements over a variety of initial model-based policies. 
We also find that, as expected, RL can deal effectively with both transients and switching delays, which lie outside the scope of traditional steady-state queuing theory.", "title": "" }, { "docid": "d076a7febd732eb6551c212670cc695a", "text": "In this paper, the concept of phase modulated MIMO radars is explained and demonstrated with a 28 nm CMOS fully integrated 79 GHz radar SoC. It includes two transmitters, two receivers, and the mm-wave frequency generation. The receivers’ outputs are digitized by on-chip ADCs and processed by a custom designed digital core, which performs correlation and accumulation with a pseudorandom sequence used in transmission. The SoC consumes 1 W to achieve 7.5 cm range resolution. A module with antennas allows for 5° resolution over ± 60° elevation and azimuth scan in 2×2 code domain MIMO operation. A 4×4 MIMO system is also demonstrated by means of two SoCs mounted on the same module.", "title": "" }, { "docid": "9cd57338305324351c26f6448f2f28ee", "text": "The factors involved in the pathogenesis of Crohn’s disease and ulcerative colitis, the two major types of inflammatory bowel disease (IBD) are summarized. Intestinal antigens composed of bacterial flora along with antigen presentation and impaired mucosal barrier have an important role in the initiation of IBD. The bacterial community may be modified by the use of antibiotics and probiotics. The dentritic cells recognize the antigens by cell surface Toll like receptor and the cytoplasmic CARD/NOD system. The balance between Th1/Th2/Th17 cell populations being the source of a variety of cytokines regulates the inflammatory mechanisms and the clearance of microbes. The intracellular killing and digestion, including autophagy, are important in the protection against microbes and their toxins. The homing process determines the location and distribution of the immune cells along the gut. All these players are potential targets of pharmacological manipulation of disease status.", "title": "" } ]
scidocsrr
76a5a76952894002e0ee7e28cba3cdcf
Shall I Compare Thee to a Machine-Written Sonnet? An Approach to Algorithmic Sonnet Generation
[ { "docid": "112a1483acf7fae119036ea231fcbe85", "text": "Part of the long lasting cultural heritage of China is the classical ancient Chinese poems which follow strict formats and complicated linguistic rules. Automatic Chinese poetry composition by programs is considered as a challenging problem in computational linguistics and requires high Artificial Intelligence assistance, and has not been well addressed. In this paper, we formulate the poetry composition task as an optimization problem based on a generative summarization framework under several constraints. Given the user specified writing intents, the system retrieves candidate terms out of a large poem corpus, and then orders these terms to fit into poetry formats, satisfying tonal and rhythm requirements. The optimization process under constraints is conducted via iterative term substitutions till convergence, and outputs the subset with the highest utility as the generated poem. For experiments, we perform generation on large datasets of 61,960 classic poems from Tang and Song Dynasty of China. A comprehensive evaluation, using both human judgments and ROUGE scores, has demonstrated the effectiveness of our proposed approach.", "title": "" }, { "docid": "faa60bb1166c83893fabf82c815b4820", "text": "We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms. The first approach uses a neural language model trained on a phonetic encoding to learn an implicit representation of both the form and content of English poetry. This model can effectively learn common poetic devices such as rhyme, rhythm and alliteration. The second approach considers poetry generation as a constraint satisfaction problem where a generative neural language model is tasked with learning a representation of content, and a discriminative weighted finite state machine constrains it on the basis of form. By manipulating the constraints of the latter model, we can generate coherent poetry with arbitrary forms and themes. A large-scale extrinsic evaluation demonstrated that participants consider machine-generated poems to be written by humans 54% of the time. In addition, participants rated a machinegenerated poem to be the most human-like amongst all evaluated.", "title": "" }, { "docid": "dd4820b9c90ea6e6bb4e40566396c0d1", "text": "Vision is a common source of inspiration for poetry. The objects and the sentimental imprints that one perceives from an image may lead to various feelings depending on the reader. In this paper, we present a system of poetry generation from images to mimic the process. Given an image, we first extract a few keywords representing objects and sentiments perceived from the image. These keywords are then expanded to related ones based on their associations in human written poems. Finally, verses are generated gradually from the keywords using recurrent neural networks trained on existing poems. Our approach is evaluated by human assessors and compared to other generation baselines. The results show that our method can generate poems that are more artistic than the baseline methods. This is one of the few attempts to generate poetry from images. By deploying our proposed approach, XiaoIce has already generated more than 12 million poems for users since its release in July 2017. 
A book of its poems has been published by Cheers Publishing, which claimed that the book is the first-ever poetry collection written by an AI in human history.", "title": "" }, { "docid": "d3069dbe4da6057d15cc0f7f6e5244cc", "text": "We take the generation of Chinese classical poem lines as a sequence-to-sequence learning problem, and build a novel system based on the RNN Encoder-Decoder structure to generate quatrains (Jueju in Chinese), with a topic word as input. Our system can jointly learn semantic meaning within a single line, semantic relevance among lines in a poem, and the use of structural, rhythmical and tonal patterns, without utilizing any constraint templates. Experimental results show that our system outperforms other competitive systems. We also find that the attention mechanism can capture the word associations in Chinese classical poetry and inverting target lines in training can improve", "title": "" }, { "docid": "517ec608208a669872a1d11c1d7836a3", "text": "Hafez is an automatic poetry generation system that integrates a Recurrent Neural Network (RNN) with a Finite State Acceptor (FSA). It generates sonnets given arbitrary topics. Furthermore, Hafez enables users to revise and polish generated poems by adjusting various style configurations. Experiments demonstrate that such “polish” mechanisms consider the user’s intention and lead to a better poem. For evaluation, we build a web interface where users can rate the quality of each poem from 1 to 5 stars. We also speed up the whole system by a factor of 10, via vocabulary pruning and GPU computation, so that adequate feedback can be collected at a fast pace. Based on such feedback, the system learns to adjust its parameters to improve poetry quality.", "title": "" } ]
[ { "docid": "955858709f4f623fda7f271b90689fe4", "text": "Empirical studies of variations in debt ratios across firms have analyzed important determinants of capital structure using statistical models. Researchers, however, rarely employ nonlinear models to examine the determinants and make little effort to identify a superior prediction model among competing ones. This paper reviews the time-series cross-sectional (TSCS) regression and the predictive abilities of neural network (NN) utilizing panel data concerning debt ratio of high-tech industries in Taiwan. We built models with these two methods using the same set of measurements as determinants of debt ratio and compared the forecasting performance of five models, namely, three TSCS regression models and two NN models. Models built with neural network obtained the lowest mean square error and mean absolute error. These results reveal that the relationships between debt ratio and determinants are nonlinear and that NNs are more competent in modeling and forecasting the test panel data. We conclude that NN models can be used to solve panel data analysis and forecasting problems.", "title": "" }, { "docid": "81f2f2ecc3b408259c1d30e6dcde9ed8", "text": "A range of new datacenter switch designs combine wireless or optical circuit technologies with electrical packet switching to deliver higher performance at lower cost than traditional packet-switched networks. These \"hybrid\" networks schedule large traffic demands via a high-rate circuits and remaining traffic with a lower-rate, traditional packet-switches. Achieving high utilization requires an efficient scheduling algorithm that can compute proper circuit configurations and balance traffic across the switches. Recent proposals, however, provide no such algorithm and rely on an omniscient oracle to compute optimal switch configurations.\n Finding the right balance of circuit and packet switch use is difficult: circuits must be reconfigured to serve different demands, incurring non-trivial switching delay, while the packet switch is bandwidth constrained. Adapting existing crossbar scheduling algorithms proves challenging with these constraints. In this paper, we formalize the hybrid switching problem, explore the design space of scheduling algorithms, and provide insight on using such algorithms in practice. We propose a heuristic-based algorithm, Solstice that provides a 2.9× increase in circuit utilization over traditional scheduling algorithms, while being within 14% of optimal, at scale.", "title": "" }, { "docid": "522938687849ccc9da8310ac9d6bbf9e", "text": "Machine learning models, especially Deep Neural Networks, are vulnerable to adversarial examples—malicious inputs crafted by adding small noises to real examples, but fool the models. Adversarial examples transfer from one model to another, enabling black-box attacks to real-world applications. In this paper, we propose a strong attack algorithm named momentum iterative fast gradient sign method (MI-FGSM) to discover adversarial examples. MI-FGSM is an extension of iterative fast gradient sign method (I-FGSM) but improves the transferability significantly. Besides, we study how to attack an ensemble of models efficiently. Experiments demonstrate the effectiveness of the proposed algorithm. 
We hope that MI-FGSM can serve as a benchmark attack algorithm for evaluating the robustness of various models and defense methods.", "title": "" }, { "docid": "6b77d96528da3152fec757928b767d31", "text": "3D interfaces use motion sensing, physical input, and spatial interaction techniques to effectively control highly dynamic virtual content. Now, with the advent of the Nintendo Wii, Sony Move, and Microsoft Kinect, game developers and researchers must create compelling interface techniques and game-play mechanics that make use of these technologies. At the same time, it is becoming increasingly clear that emerging game technologies are not just going to change the way we play games, they are also going to change the way we make and view art, design new products, analyze scientific datasets, and more.\n This introduction to 3D spatial interfaces demystifies the workings of modern videogame motion controllers and provides an overview of how it is used to create 3D interfaces for tasks such as 2D and 3D navigation, object selection and manipulation, and gesture-based application control. Topics include the strengths and limitations of various motion-controller sensing technologies in today's peripherals, useful techniques for working with these devices, and current and future applications of these technologies to areas beyond games. The course presents valuable information on how to utilize existing 3D user-interface techniques with emerging technologies, how to develop interface techniques, and how to learn from the successes and failures of spatial interfaces created for a variety of application domains.", "title": "" }, { "docid": "96b1688b19bf71e8f1981d9abe52fc2c", "text": "Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) – specifically “How?” and “Why?” questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement comparing to baselines that disregard process structure.", "title": "" }, { "docid": "35c18e570a6ab44090c1997e7fe9f1b4", "text": "Online information maintenance through cloud applications allows users to store, manage, control and share their information with other users as well as Cloud service providers. There have been serious privacy concerns about outsourcing user information to cloud servers. But also due to an increasing number of cloud data security incidents happened in recent years. Proposed system is a privacy-preserving system using Attribute based Multifactor Authentication. Proposed system provides privacy to users data with efficient authentication and store them on cloud servers such that servers do not have access to sensitive user information. 
Meanwhile users can maintain full control over access to their uploaded files and data, by assigning fine-grained, attribute-based access privileges to selected files and data, while different users can have access to different parts of the System. This application allows clients to set privileges to different users to access their data.", "title": "" }, { "docid": "5b6a73103e7310de86c37185c729b8d9", "text": "Motion segmentation is currently an active area of research in computer Vision. The task of comparing different methods of motion segmentation is complicated by the fact that researchers may use subtly different definitions of the problem. Questions such as ”Which objects are moving?”, ”What is background?”, and ”How can we use motion of the camera to segment objects, whether they are static or moving?” are clearly related to each other, but lead to different algorithms, and imply different versions of the ground truth. This report has two goals. The first is to offer a precise definition of motion segmentation so that the intent of an algorithm is as well-defined as possible. The second is to report on new versions of three previously existing data sets that are compatible with this definition. We hope that this more detailed definition, and the three data sets that go with it, will allow more meaningful comparisons of certain motion segmentation methods.", "title": "" }, { "docid": "869e01855c8cfb9dc3e64f7f3e73cd60", "text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.", "title": "" }, { "docid": "9a454ccc77edb739a327192dafd5d974", "text": "In the present time, due to attractive features of cloud computing, the massive amount of data has been stored in the cloud. Though cloud-based services offer many benefits but privacy and security of the sensitive data is a big issue. These issues are resolved by storing sensitive data in encrypted form. Encrypted storage protects the data against unauthorized access, but it weakens some basic and important functionality like search operation on the data, i.e. searching the required data by the user on the encrypted data requires data to be decrypted first and then search, so this eventually, slows down the process of searching. To achieve this many encryption schemes have been proposed, however, all of the schemes handle exact Query matching but not Similarity matching. While user uploads the file, features are extracted from each document. 
When the user fires a query, trapdoor of that query is generated and search is performed by finding the correlation among documents stored on cloud and query keyword, using Locality Sensitive Hashing.", "title": "" }, { "docid": "1cdee228f9813e4f33df1706ec4e7876", "text": "Existing methods on sketch based image retrieval (SBIR) are usually based on the hand-crafted features whose ability of representation is limited. In this paper, we propose a sketch based image retrieval method via image-aided cross domain learning. First, the deep learning model is introduced to learn the discriminative features. However, it needs a large number of images to train the deep model, which is not suitable for the sketch images. Thus, we propose to extend the sketch training images via introducing the real images. Specifically, we initialize the deep models with extra image data, and then extract the generalized boundary from real images as the sketch approximation. The using of generalized boundary is under the assumption that their domain is similar with sketch domain. Finally, the neural network is fine-tuned with the sketch approximation data. Experimental results on Flicker15 show that the proposed method has a strong ability to link the associated image-sketch pairs and the results outperform state-of-the-arts methods.", "title": "" }, { "docid": "5267441df39432707e5c3a4616ba1413", "text": "Many investigators have detailed the soft tissue anatomy of the face. Despite the broad reference base, confusion remains about the consistent nature of the fascial anatomy of the craniofacial soft tissue envelope in relation to the muscular, neurovascular and specialised structures. This confusion is compounded by the lack of consistent terminology. This study presents a coherent account of the fascial planes of the temple and midface. Ten fresh cadaveric facial halves were dissected, in a level-by-level approach, to display the fascial anatomy of the midface and temporal region. The contralateral 10 facial halves were coronally sectioned through the zygomatic arch at a consistent point anterior to the tragus. These sections were histologically prepared to demonstrate the fascial anatomy en-bloc with the skeletal and specialised soft tissues. Three generic subcutaneous fascial layers consistently characterise the face and temporal regions, and remain in continuity across the zygomatic arch. These three layers are the superficial musculo-aponeurotic system (SMAS), the innominate fascia, and the muscular fasciae. The many inconsistent names previously given to these layers reflect their regional specialisation in the temple, zygomatic area, and midface. Appreciation of the consistency of these layers, which are in continuity with the layers of the scalp, greatly facilitates an understanding of applied craniofacial soft tissue anatomy.", "title": "" }, { "docid": "57c8b69c18b5b2c38552295f8e8789d5", "text": "In many safety-critical applications such as autonomous driving and surgical robots, it is desirable to obtain prediction uncertainties from object detection modules to help support safe decision-making. Specifically, such modules need to estimate the probability of each predicted object in a given region and the confidence interval for its bounding box. While recent Bayesian deep learning methods provide a principled way to estimate this uncertainty, the estimates for the bounding boxes obtained using these methods are uncalibrated. 
In this paper, we address this problem for the single-object localization task by adapting an existing technique for calibrating regression models. We show, experimentally, that the resulting calibrated model obtains more reliable uncertainty estimates.", "title": "" }, { "docid": "be866036f5ae430d6dd46cdd1d9319dd", "text": "In this contribution an integrated HPA-DPDT for next generation AESA TRMs is presented. The proposed circuit relies on a concurrent design technique merging switches and HPA matching network. Realized MMIC features a 3×5mm2 outline operating in the 6-18 GHz band with a typical output power of 2W, an associated PAE of 13% and 3dB insertion loss in RX mode.", "title": "" }, { "docid": "d8de391287150bf580c8d613000d5b84", "text": "3D integration consists of 3D IC packaging, 3D IC integration, and 3D Si integration. They are different and in general the TSV (through-silicon via) separates 3D IC packaging from 3D IC/Si integrations since the latter two use TSV but 3D IC packaging does not. TSV (with a new concept that every chip or interposer could have two surfaces with circuits) is the heart of 3D IC/Si integrations and is the focus of this investigation. The origin of 3D integration is presented. Also, the evolution, challenges, and outlook of 3D IC/Si integrations are discussed as well as their road maps are presented. Finally, a few generic, low-cost, and thermal-enhanced 3D IC integration system-in-packages (SiPs) with various passive TSV interposers are proposed.", "title": "" }, { "docid": "2ca0c604b449e1495bd57d96381e0e1f", "text": "The data ̄ow program graph execution model, or data ̄ow for short, is an alternative to the stored-program (von Neumann) execution model. Because it relies on a graph representation of programs, the strengths of the data ̄ow model are very much the complements of those of the stored-program one. In the last thirty or so years since it was proposed, the data ̄ow model of computation has been used and developed in very many areas of computing research: from programming languages to processor design, and from signal processing to recon®gurable computing. This paper is a review of the current state-of-the-art in the applications of the data ̄ow model of computation. It focuses on three areas: multithreaded computing, signal processing and recon®gurable computing. Ó 1999 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "2c2be931e456761824920fcc9e4666ec", "text": "The resource description framework (RDF) is a metadata model and language recommended by the W3C. This paper presents a framework to incorporate temporal reasoning into RDF, yielding temporal RDF graphs. We present a semantics for these kinds of graphs which includes the notion of temporal entailment and a syntax to incorporate this framework into standard RDF graphs, using the RDF vocabulary plus temporal labels. We give a characterization of temporal entailment in terms of RDF entailment and show that the former does not yield extra asymptotic complexity with respect to nontemporal RDF graphs. We also discuss temporal RDF graphs with anonymous timestamps, providing a theoretical framework for the study of temporal anonymity. 
Finally, we sketch a temporal query language for RDF, along with complexity results for query evaluation that show that the time dimension preserves the tractability of answers", "title": "" }, { "docid": "b9c40aa4c8ac9d4b6cbfb2411c542998", "text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.", "title": "" }, { "docid": "6902e1604957fa21adbe90674bf5488d", "text": "State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. 
First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in subseconds.", "title": "" }, { "docid": "b853f492667d4275295c0228566f4479", "text": "This study reports spore germination, early gametophyte development and change in the reproductive phase of Drynaria fortunei, a medicinal fern, in response to changes in pH and light spectra. Germination of D. fortunei spores occurred on a wide range of pH from 3.7 to 9.7. The highest germination (63.3%) occurred on ½ strength Murashige and Skoog basal medium supplemented with 2% sucrose at pH 7.7 under white light condition. Among the different light spectra tested, red, far-red, blue, and white light resulted in 71.3, 42.3, 52.7, and 71.0% spore germination, respectively. There were no morphological differences among gametophytes grown under white and blue light. Elongated or filamentous but multiseriate gametophytes developed under red light, whereas under far-red light gametophytes grew as uniseriate filaments consisting of mostly elongated cells. Different light spectra influenced development of antheridia and archegonia in the gametophytes. Gametophytes gave rise to new gametophytes and developed antheridia and archegonia after they were transferred to culture flasks. After these gametophytes were transferred to plastic tray cells with potting mix of tree fern trunk fiber mix (TFTF mix) and peatmoss the highest number of sporophytes was found. Sporophytes grown in pots developed rhizomes.", "title": "" } ]
scidocsrr
db01634ad7cfb96719323ef5b1cedf2b
Learning and Game AI
[ { "docid": "a583c568e3c2184e5bda272422562a12", "text": "Video games are primarily designed for the players. However, video game spectating is also a popular activity, boosted by the rise of online video sites and major gaming tournaments. In this paper, we focus on the spectator, who is emerging as an important stakeholder in video games. Our study focuses on Starcraft, a popular real-time strategy game with millions of spectators and high level tournament play. We have collected over a hundred stories of the Starcraft spectator from online sources, aiming for as diverse a group as possible. We make three contributions using this data: i) we find nine personas in the data that tell us who the spectators are and why they spectate; ii) we strive to understand how different stakeholders, like commentators, players, crowds, and game designers, affect the spectator experience; and iii) we infer from the spectators' expressions what makes the game entertaining to watch, forming a theory of distinct types of information asymmetry that create suspense for the spectator. One design implication derived from these findings is that, rather than presenting as much information to the spectator as possible, it is more important for the stakeholders to be able to decide how and when they uncover that information.", "title": "" } ]
[ { "docid": "c8b1a0d5956ced6deaefe603efc523ba", "text": "What can wearable sensors and usage of smart phones tell us about academic performance, self-reported sleep quality, stress and mental health condition? To answer this question, we collected extensive subjective and objective data using mobile phones, surveys, and wearable sensors worn day and night from 66 participants, for 30 days each, totaling 1,980 days of data. We analyzed daily and monthly behavioral and physiological patterns and identified factors that affect academic performance (GPA), Pittsburg Sleep Quality Index (PSQI) score, perceived stress scale (PSS), and mental health composite score (MCS) from SF-12, using these month-long data. We also examined how accurately the collected data classified the participants into groups of high/low GPA, good/poor sleep quality, high/low self-reported stress, high/low MCS using feature selection and machine learning techniques. We found associations among PSQI, PSS, MCS, and GPA and personality types. Classification accuracies using the objective data from wearable sensors and mobile phones ranged from 67-92%.", "title": "" }, { "docid": "ce1048eb76d48800b4e455b8e5d3342a", "text": "While it is true that successful implementation of an enterprise resource planning (ERP) system is a task of Herculean proportions, it is not impossible. If your organization is to reap the benefits of ERP, it must first develop a plan for success. But “prepare to see your organization reengineered, your staff disrupted, and your productivity drop before the payoff is realized.”1 Implementing ERP must be viewed and undertaken as a new business endeavor and a team mission, not just a software installation. Companies must involve all employees, and unconditionally and completely sell them on the concept of ERP for it to be a success.2 A successful implementation means involving, supervising, recognizing, and retaining those who have worked or will work closely with the system. Without a team attitude and total backing by everyone involved, an ERP implementation will end in less than an ideal situation.3 This was the situation for a soft drink bottler that tried to cut corners and did not recognize the importance of the people so heavily involved and depended on.", "title": "" }, { "docid": "e84ca42f96cca0fe3ed7c70d90554a8d", "text": "While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming the useful candidate papers, in very large digital libraries, is becoming an essential and challenging task for scholars. Unfortunately, because of language barrier, some scientists (especially the junior ones or graduate students who do not master other languages) cannot efficiently locate the publications hosted in a foreign language repository. In this study, we propose a novel solution, cross-language citation recommendation via Hierarchical Representation Learning on Heterogeneous Graph (HRLHG), to address this new problem. HRLHG can learn a representation function by mapping the publications, from multilingual repositories, to a low-dimensional joint embedding space from various kinds of vertexes and relations on a heterogeneous graph. By leveraging both global (task specific) plus local (task independent) information as well as a novel supervised hierarchical random walk algorithm, the proposed method can optimize the publication representations by maximizing the likelihood of locating the important cross-language neighborhoods on the graph. 
Experiment results show that the proposed method can not only outperform state-of-the-art baseline models, but also improve the interpretability of the representation model for cross-language citation recommendation task.", "title": "" }, { "docid": "aac17c2c975afaa3f55e42e698d398b3", "text": "Many state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) Systems are hybrids of neural networks and Hidden Markov Models (HMMs). Recently, more direct end-to-end methods have been investigated, in which neural architectures were trained to model sequences of characters [1,2]. To our knowledge, all these approaches relied on Connectionist Temporal Classification [3] modules. We investigate an alternative method for sequence modelling based on an attention mechanism that allows a Recurrent Neural Network (RNN) to learn alignments between sequences of input frames and output labels. We show how this setup can be applied to LVCSR by integrating the decoding RNN with an n-gram language model and by speeding up its operation by constraining selections made by the attention mechanism and by reducing the source sequence lengths by pooling information over time. Recognition accuracies similar to other HMM-free RNN-based approaches are reported for the Wall Street Journal corpus.", "title": "" }, { "docid": "3e66421e80bfc22f592ffbd6254b1951", "text": "This paper presents a system which extends the use of the traditional white cane by the blind for navigation purposes in indoor environments. Depth data of the scene in front of the user is acquired using the Microsoft Kinect sensor which is then mapped into a pattern representation. Using neural networks, the proposed system uses this information to extract relevant features from the scene, enabling the detection of possible obstacles along the way. The results show that the neural network is able to correctly classify the type of pattern presented as input.", "title": "" }, { "docid": "934bc45566dfa5199084a4f804513a9f", "text": "Correct architecture is the backbone of the successful software. To address the complexity of the growing software there are different architectural models that are designed to handle this problem. The most important thing is to differentiate software architecture from software design. As the web based applications are developed under tight schedule and in quickly changing environment, the developers have to face different problematical situations. Therefore understanding of the components of architectures, specially designed for web based applications is crucial to overcome these challenging situations. The purpose of this paper is to emphasize on possible architectural solutions for web based applications. Different types of software architectures that are based on different architectural styles are compared according to the nature of software. Keyword: Component based architecture, Layered architecture, Service oriented architecture, Web applications.", "title": "" }, { "docid": "ffede4ad022d6ea4006c2e123807e89f", "text": "Awareness about the energy consumption of appliances can help to save energy in households. Non-intrusive Load Monitoring (NILM) is a feasible approach to provide consumption feedback at appliance level. In this paper, we evaluate a broad set of features for electrical appliance recognition, extracted from high frequency start-up events. These evaluations were applied on several existing high frequency energy datasets. 
To examine clean signatures, we ran all experiments on two datasets that are based on isolated appliance events; more realistic results were retrieved from two real household datasets. Our feature set consists of 36 signatures from related work including novel approaches, and from other research fields. The results of this work include a stand-alone feature ranking, promising feature combinations for appliance recognition in general and appliance-wise performances.", "title": "" }, { "docid": "e2ce393fade02f0dfd20b9aca25afd0f", "text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.", "title": "" }, { "docid": "a4c739a3b4d6adbb907568c7fdc85d9d", "text": "This paper describes about implementation of speech recognition system on a mobile robot for controlling movement of the robot. The methods used for speech recognition system are Linear Predictive Coding (LPC) and Artificial Neural Network (ANN). LPC method is used for extracting feature of a voice signal and ANN is used as the recognition method. Backpropagation method is used to train the ANN. Voice signals are sampled directly from the microphone and then they are processed using LPC method for extracting the features of voice signal. For each voice signal, LPC method produces 576 data. Then, these data become the input of the ANN. The ANN was trained by using 210 data training. This data training includes the pronunciation of the seven words used as the command, which are created from 30 different people. Experimental results show that the highest recognition rate that can be achieved by this system is 91.4%. This result is obtained by using 25 samples per word, 1 hidden layer, 5 neurons for each hidden layer, and learning rate 0.1.", "title": "" }, { "docid": "212619e09ee7dfe0f32d90e2da25c8f0", "text": "This paper tackles anomaly detection in videos, which is an extremely challenging task because anomaly is unbounded. We approach this task by leveraging a Convolutional Neural Network (CNN or ConvNet) for appearance encoding for each frame, and leveraging a Convolutional Long Short Term Memory (ConvLSTM) for memorizing all past frames which corresponds to the motion information. Then we integrate ConvNet and ConvLSTM with Auto-Encoder, which is referred to as ConvLSTM-AE, to learn the regularity of appearance and motion for the ordinary moments. Compared with 3D Convolutional Auto-Encoder based anomaly detection, our main contribution lies in that we propose a ConvLSTM-AE framework which better encodes the change of appearance and motion for normal events, respectively. To evaluate our method, we first conduct experiments on a synthesized Moving-MNIST dataset under controlled settings, and results show that our method can easily identify the change of appearance and motion. 
Extensive experiments on real anomaly datasets further validate the effectiveness of our method for anomaly detection.", "title": "" }, { "docid": "056944e9e568d69d5caa707d03353f62", "text": "Cyberbullying has emerged as a new form of antisocial behaviour in the context of online communication over the last decade. The present study investigates potential longitudinal risk factors for cyberbullying. A total of 835 Swiss seventh graders participated in a short-term longitudinal study (two assessments 6 months apart). Students reported on the frequency of cyberbullying, traditional bullying, rule-breaking behaviour, cybervictirnisation, traditional victirnisation, and frequency of online communication (interpersonal characteristics). In addition, we assessed moral disengagement, empathic concern, and global self-esteem (intrapersonal characteristics). Results showed that traditional bullying, rule-breaking behaviour, and frequency of online communication are longitudinal risk factors for involvement in cyberbullying as a bully. Thus, cyberbullying is strongly linked to real-world antisocial behaviours. Frequent online communication may be seen as an exposure factor that increases the likelihood of engaging in cyberbullying. In contrast, experiences of victimisation and intrapersonal characteristics were not found to increase the longitudinal risk for cyberbullying over and above antisocial behaviour and frequency of online communication. Implications of the findings for the prevention of cyberbullying are discussed. Copyright © 2012 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "64efd590a51fc3cab97c9b4b17ba9b40", "text": "The problem of detecting bots, automated social media accounts governed by software but disguising as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, or to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amount of social media posts, and leveraging information from network structure, temporal dynamics, sentiment analysis, etc. In this paper, we propose a deep neural network based on contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text. Another contribution that we make is proposing a technique based on synthetic minority oversampling to generate a large labeled dataset, suitable for deep nets training, from a minimal amount of labeled data (roughly 3,000 examples of sophisticated Twitter bots). We demonstrate that, from just one single tweet, our architecture can achieve high classification accuracy (AUC > 96%) in separating bots from humans. We apply the same architecture to account-level bot detection, achieving nearly perfect classification accuracy (AUC > 99%). Our system outperforms previous state of the art while leveraging a small and interpretable set of features yet requiring minimal training data.", "title": "" }, { "docid": "b31aaa6805524495f57a2f54d0dd86f1", "text": "CLINICAL HISTORY A 54-year-old white female was seen with a 10-year history of episodes of a burning sensation of the left ear. 
The episodes are preceded by nausea and a hot feeling for about 15 seconds and then the left ear becomes visibly red for an average of about 1 hour, with a range from about 30 minutes to 2 hours. About once every 2 years, she would have a flurry of episodes occurring over about a 1-month period during which she would average about five episodes with a range of 1 to 6. There was also an 18-year history of migraine without aura occurring about once a year. At the age of 36 years, she developed left-sided pulsatile tinnitus. A cerebral arteriogram revealed a proximal left internal carotid artery occlusion of uncertain etiology after extensive testing. An MRI scan at the age of 45 years was normal. Neurological examination was normal. A carotid ultrasound study demonstrated complete occlusion of the left internal carotid artery and a normal right. Question.—What is the diagnosis?", "title": "" }, { "docid": "a64a83791259350d5d76dc1ea097a7fb", "text": "Today the channels for expressing opinions seem to increase daily. When these opinions are relevant to a company, they are important sources of business insight, whether they represent critical intelligence about a customer's defection risk, the impact of an influential reviewer on other people's purchase decisions, or early feedback on product releases, company news or competitors. Capturing and analyzing these opinions is a necessity for proactive product planning, marketing and customer service and it is also critical in maintaining brand integrity. The importance of harnessing opinion is growing as consumers use technologies such as Twitter to express their views directly to other consumers. Tracking the disparate sources of opinion is hard - but even harder is quickly and accurately extracting the meaning so companies can analyze and act. Tweets' Language is complicated and contextual, especially when people are expressing opinions and requires reliable sentiment analysis based on parsing many linguistic shades of gray. This article argues that using the R programming platform for analyzing tweets programmatically simplifies the task of sentiment analysis and opinion mining. An R programming technique has been used for testing different sentiment lexicons as well as different scoring schemes. Experiments on analyzing the tweets of users over six NHL hockey teams reveals the effectively of using the opinion lexicon and the Latent Dirichlet Allocation (LDA) scoring scheme.", "title": "" }, { "docid": "c77c6ea404d9d834ef1be5a1d7222e66", "text": "We introduce the concepts of regular and totally regular bipolar fuzzy graphs. We prove necessary and sufficient condition under which regular bipolar fuzzy graph and totally bipolar fuzzy graph are equivalent. We introduce the notion of bipolar fuzzy line graphs and present some of their properties. We state a necessary and sufficient condition for a bipolar fuzzy graph to be isomorphic to its corresponding bipolar fuzzy line graph. We examine when an isomorphism between two bipolar fuzzy graphs follows from an isomorphism of their corresponding bipolar fuzzy line graphs.", "title": "" }, { "docid": "88d226d5b10a044a4c368a0a6136e421", "text": "The areas of machine learning and communication technology are converging. Today’s communications systems generate a huge amount of traffic data, which can help to significantly enhance the design and management of networks and communication components when combined with advanced machine learning methods. 
Furthermore, recently developed end-to-end training procedures offer new ways to jointly optimize the components of a communication system. Also in many emerging application fields of communication technology, e.g., smart cities or internet of things, machine learning methods are of central importance. This paper gives an overview over the use of machine learning in different areas of communications and discusses two exemplar applications in wireless networking. Furthermore, it identifies promising future research topics and discusses their potential impact.", "title": "" }, { "docid": "85462fe3cf060d7fa85251d5a7d30d1a", "text": "Validity of PostureScreen Mobile® in the Measurement of Standing Posture Breanna Cristine Berry Hopkins Department of Exercise Sciences, BYU Master of Science Background: PostureScreen Mobile® is an app created to quickly screen posture using front and side-view photographs. There is currently a lack of evidence that establishes PostureScreen Mobile® (PSM) as a valid measure of posture. Therefore, the purpose of this preliminary study was to document the validity and reliability of PostureScreen Mobile® in assessing static standing posture. Methods: This study was an experimental trial in which the posture of 50 male participants was assessed a total of six times using two different methods: PostureScreen Mobile® and Vicon 3D motion analysis system (VIC). Postural deviations, as measured during six trials of PSM assessments (3 trials with and 3 trials without anatomical markers), were compared to the postural deviations as measured using the VIC as the criterion measure. Measurement of lateral displacement on the x-axis (shift) and rotation on the y-axis (tilt) were made of the head, shoulders, and hips in the frontal plane. Measurement of forward/rearward displacement on the Z-axis (shift) of the head, shoulders, hips, and knees were made in the sagittal plane. Validity was evaluated by comparing the PSM measurements of shift and tilt of each body part to that of the VIC. Reliability was evaluated by comparing the variance of PSM measurements to the variance of VIC measurements. The statistical model employed the Bayesian framework and consisted of the scaled product of the likelihood of the data given the parameters and prior probability densities for each of the parameters. Results: PSM tended to overestimate VIC postural tilt and shift measurements in the frontal plane and underestimate VIC postural shift measurements in the sagittal plane. Use of anatomical markers did not universally improve postural measurements with PSM, and in most cases, the variance of postural measurements using PSM exceeded that of VIC. The patterns in the intraclass correlation coefficients (ICC) suggest high trial-to-trial variation in posture. Conclusions: We conclude that until research further establishes the validity and reliability of the PSM app, it should not be used in research or clinical applications when accurate postural assessments are necessary or when serial measurements of posture will be performed. We suggest that the PSM be used by health and fitness professionals as a screening tool, as described by the manufacturer. Due to the suspected trial-to-trial variation in posture, we question the usefulness of a single postural assessment.", "title": "" }, { "docid": "fa005ff6f8f59517f10a5c9808e6549d", "text": "Traditional approaches to simultaneous localization and mapping (SLAM) rely on low-level geometric features such as points, lines, and planes. 
They are unable to assign semantic labels to landmarks observed in the environment. Furthermore, loop closure recognition based on low-level features is often viewpoint-dependent and subject to failure in ambiguous or repetitive environments. On the other hand, object recognition methods can infer landmark classes and scales, resulting in a small set of easily recognizable landmarks, ideal for view-independent unambiguous loop closure. In a map with several objects of the same class, however, a crucial data association problem exists. While data association and recognition are discrete problems usually solved using discrete inference, classical SLAM is a continuous optimization over metric information. In this paper, we formulate an optimization problem over sensor states and semantic landmark positions that integrates metric information, semantic information, and data associations, and decompose it into two interconnected problems: an estimation of discrete data association and landmark class probabilities, and a continuous optimization over the metric states. The estimated landmark and robot poses affect the association and class distributions, which in turn affect the robot-landmark pose optimization. The performance of our algorithm is demonstrated on indoor and outdoor datasets.", "title": "" }, { "docid": "f0093159ff25b3c19e9c48d9c09bcad5", "text": "This article discusses the radiographic manifestation of jaw lesions whose etiology may be traced to underlying systemic disease. Some changes may be related to hematologic or metabolic disorders. A group of bone changes may be associated with disorders of the endocrine system. It is imperative for the clinician to compare the constantly changing and dynamic maxillofacial skeleton to the observed radiographic pathology as revealed on intraoral and extraoral imagery.", "title": "" }, { "docid": "53b32cdb6c3d511180d8cb194c286ef5", "text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.", "title": "" } ]
scidocsrr
568c7e5bc4f47c8bf8a0414f32f4bb13
Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models
[ { "docid": "f9823fc9ac0750cc247cfdbf0064c8b5", "text": "Scene segmentation is a challenging task as it need label every pixel in the image. It is crucial to exploit discriminative context and aggregate multi-scale features to achieve better segmentation. In this paper, we first propose a novel context contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context. The proposed context contrasted local feature greatly improves the parsing performance, especially for inconspicuous objects and background stuff. Furthermore, we propose a scheme of gated sum to selectively aggregate multi-scale features for each spatial position. The gates in this scheme control the information flow of different scale features. Their values are generated from the testing image by the proposed network learnt from the training data so that they are adaptive not only to the training data, but also to the specific testing image. Without bells and whistles, the proposed approach achieves the state-of-the-arts consistently on the three popular scene segmentation datasets, Pascal Context, SUN-RGBD and COCO Stuff.", "title": "" }, { "docid": "ba29af46fd410829c450eed631aa9280", "text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.", "title": "" }, { "docid": "4301af5b0c7910480af37f01847fb1fe", "text": "Cross-modal retrieval is a very hot research topic that is imperative to many applications involving multi-modal data. Discovering an appropriate representation for multi-modal data and learning a ranking function are essential to boost the cross-media retrieval. Motivated by the assumption that a compositional cross-modal semantic representation (pairs of images and text) is more attractive for cross-modal ranking, this paper exploits the existing image-text databases to optimize a ranking function for cross-modal retrieval, called deep compositional cross-modal learning to rank (C2MLR). In this paper, C2MLR considers learning a multi-modal embedding from the perspective of optimizing a pairwise ranking problem while enhancing both local alignment and global alignment. In particular, the local alignment (i.e., the alignment of visual objects and textual words) and the global alignment (i.e., the image-level and sentence-level alignment) are collaboratively utilized to learn the multi-modal embedding common space in a max-margin learning to rank manner. 
The experiments demonstrate the superiority of our proposed C2MLR due to its nature of multi-modal compositional embedding.", "title": "" } ]
[ { "docid": "c122a50d90e9f4834f36a19ba827fa9f", "text": "Cancers are able to grow by subverting immune suppressive pathways, to prevent the malignant cells as being recognized as dangerous or foreign. This mechanism prevents the cancer from being eliminated by the immune system and allows disease to progress from a very early stage to a lethal state. Immunotherapies are newly developing interventions that modify the patient's immune system to fight cancer, by either directly stimulating rejection-type processes or blocking suppressive pathways. Extracellular adenosine generated by the ectonucleotidases CD39 and CD73 is a newly recognized \"immune checkpoint mediator\" that interferes with anti-tumor immune responses. In this review, we focus on CD39 and CD73 ectoenzymes and encompass aspects of the biochemistry of these molecules as well as detailing the distribution and function on immune cells. Effects of CD39 and CD73 inhibition in preclinical and clinical studies are discussed. Finally, we provide insights into potential clinical application of adenosinergic and other purinergic-targeting therapies and forecast how these might develop in combination with other anti-cancer modalities.", "title": "" }, { "docid": "d18ed4c40450454d6f517c808da7115a", "text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.", "title": "" }, { "docid": "146c58e49221a9e8f8dbcdc149737924", "text": "Gesture recognition is to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. Hand Gestures have greater importance in designing an intelligent and efficient human–computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper a survey on various recent gesture recognition approaches is provided with particular emphasis on hand gestures. A review of static hand posture methods are explained with different tools and algorithms applied on gesture recognition system, including connectionist models, hidden Markov model, and fuzzy clustering. Challenges and future research directions are also highlighted.", "title": "" }, { "docid": "25739e04a42f7309127596846d9eefa3", "text": "We consider a new formulation of abduction. Our formulation differs from the existing approaches in that it does not cast the “plausibility” of explanations in terms of either syntactic minimality or an explicitly given prior distribution. Instead, “plausibility,” along with the rules of the domain, is learned from concrete examples (settings of attributes). Our version of abduction thus falls in the “learning to reason” framework of Khardon and Roth. Such approaches enable us to capture a natural notion of “plausibility” in a domain while avoiding the problem of specifying an explicit representation of what is “plausible,” a task that humans find extremely difficult. In this work, we specifically consider the question of which syntactic classes of formulas have efficient algorithms for abduction. 
It turns out that while the representation of the query is irrelevant to the computational complexity of our problem, the representation of the explanation critically affects its tractability. We find that the class of k-DNF explanations can be found in polynomial time for any fixed k; but, we also find evidence that even very weak versions of our abduction task are intractable for the usual class of conjunctive explanations. This evidence is provided by a connection to the usual, inductive PAC-learning model proposed by Valiant. We also briefly consider an exception-tolerant variant of abduction. We observe that it is possible for polynomial-time algorithms to tolerate a few adversarially chosen exceptions, again for the class of k-DNF explanations. All of the algorithms we study are particularly simple, and indeed are variants of a rule proposed by Mill.", "title": "" }, { "docid": "7858a5855b7a8420f74bb3af064c31ed", "text": "Current technologies for searching scientific literature do not support answering many queries that could significantly improve the day-to-day activities of a researcher. For instance, a Machine Translation (MT) researcher would like to answer questions such as: • Which are the best published results reported on the NIST-09 Chinese dataset? • What are the most important methods for speeding up phrase-based decoding? • Are there papers showing that a neural translation model is better than a non-neural? Current methods cannot yet infer the main elements of experiments reported in papers; there is no consensus on what the elements and the relations between them should be.", "title": "" }, { "docid": "460fd722b6dffdb78ce8696f801cf02d", "text": "Clustered regularly interspaced short palindromic repeats (CRISPR) are a distinctive feature of the genomes of most Bacteria and Archaea and are thought to be involved in resistance to bacteriophages. We found that, after viral challenge, bacteria integrated new spacers derived from phage genomic sequences. Removal or addition of particular spacers modified the phage-resistance phenotype of the cell. Thus, CRISPR, together with associated cas genes, provided resistance against phages, and resistance specificity is determined by spacer-phage sequence similarity.", "title": "" }, { "docid": "98cfdc1fb3c957283eb62470376edf82", "text": "In this paper we present the MDA framework (standing for Mechanics, Dynamics, and Aesthetics), developed and taught as part of the Game Design and Tuning Workshop at the Game Developers Conference, San Jose 2001-2004. MDA is a formal approach to understanding games – one which attempts to bridge the gap between game design and development, game criticism, and technical game research. We believe this methodology will clarify and strengthen the iterative processes of developers, scholars and researchers alike, making it easier for all parties to decompose, study and design a broad class of game designs and game artifacts.", "title": "" }, { "docid": "c2fc4e65c484486f5612f4006b6df102", "text": "Although flat item category structure where categories are independent at the same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. 
Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.", "title": "" }, { "docid": "18153ed3c2141500e0f245e3846df173", "text": "This paper presents the modeling and simulation of a 25 kV 50 Hz AC traction system using power system block set (PSB) / SIMULINK software package. The three-phase system with substations, track section with rectifier-fed DC locomotives and a detailed traction load are included in the model. The model has been used to study the effect of loading and fault conditions in 25 kV AC traction. The relay characteristic proposed is a combination of two quadrilaterals in the X-R plane. A brief summary of the hardware set-up used to implement and test the relay characteristic using a Texas Instruments TMS320C50 digital signal processor (DSP) has also been presented.", "title": "" }, { "docid": "a1d9c897f926fa4cc45ebc6209deb6bc", "text": "This paper addresses the relationship between the ego, id, and internal objects. While ego psychology views the ego as autonomous of the drives, a less well-known alternative position views the ego as constituted by the drives. Based on Freud's ego-instinct account, this position has developed into a school of thought which postulates that the drives act as knowers. Given that there are multiple drives, this position proposes that personality is constituted by multiple knowers. Following on from Freud, the ego is viewed as a composite sub-set of the instinctual drives (ego-drives), whereas those drives cut off from expression form the id. The nature of the \"self\" is developed in terms of identification and the possibility of multiple personalities is also established. This account is then extended to object-relations and the explanatory value of the ego-drive account is discussed in terms of the addressing the nature of ego-structures and the dynamic nature of internal objects. Finally, the impact of psychological conflict and the significance of repression for understanding the nature of splits within the psyche are also discussed.", "title": "" }, { "docid": "63ae128637d0855ca1b09793314aad03", "text": "Gray platelet syndrome (GPS) is a predominantly recessive platelet disorder that is characterized by mild thrombocytopenia with large platelets and a paucity of α-granules; these abnormalities cause mostly moderate but in rare cases severe bleeding. We sequenced the exomes of four unrelated individuals and identified NBEAL2 as the causative gene; it has no previously known function but is a member of a gene family that is involved in granule development. Silencing of nbeal2 in zebrafish abrogated thrombocyte formation.", "title": "" }, { "docid": "6eff790c76e7eb7016eef6d306a0dde0", "text": "To cite: Rozenblum R, Bates DW. BMJ Qual Saf 2013;22:183–186. Patients are central to healthcare delivery, yet all too often their perspectives and input have not been considered by providers. 2 This is beginning to change rapidly and is having a major impact across a range of dimensions. Patients are becoming more engaged in their care and patient-centred healthcare has emerged as a major domain of quality. At the same time, social media in particular and the internet more broadly are widely recognised as having produced huge effects across societies. 
For example, few would have predicted the Arab Spring, yet it was clearly enabled by media such as Facebook and Twitter. Now these technologies are beginning to pervade the healthcare space, just as they have so many others. But what will their effects be? These three domains—patient-centred healthcare, social media and the internet— are beginning to come together, with powerful and unpredictable consequences. We believe that they have the potential to create a major shift in how patients and healthcare organisations connect, in effect, the ‘perfect storm’, a phrase that has been used to describe a situation in which a rare combination of circumstances result in an event of unusual magnitude creating the potential for non-linear change. Historically, patients have paid relatively little attention to quality, safety and the experiences large groups of other patients have had, and have made choices about where to get healthcare based largely on factors like reputation, the recommendations of a friend or proximity. Part of the reason for this was that information about quality or the opinions of others about their care was hard to access before the internet. Today, patients appear to be becoming more engaged with their care in general, and one of the many results is that they are increasingly using the internet to share and rate their experiences of health care. They are also using the internet to connect with others having similar illnesses, to share experiences, and beginning to manage their illnesses by leveraging these technologies. While it is not yet clear what impact patients’ use of the internet and social media will have on healthcare, they will definitely have a major effect. Healthcare organisations have generally been laggards in this space—they need to start thinking about how they will use the internet in a variety of ways, with specific examples being leveraging the growing number of patients that are using the internet to describe their experiences of healthcare and how they can incorporate patient’s feedback via the internet into the organisational quality improvement process.", "title": "" }, { "docid": "8d5759855079e2ddaab2e920b93ca2a3", "text": "In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats. This initial proof of concept study shows that the concept is viable.", "title": "" }, { "docid": "164fca8833981d037f861aada01d5f7f", "text": "Kernel methods provide a principled way to perform non linear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. 
However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that allows to efficiently process millions of points. FALKON is derived combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n √ n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.", "title": "" }, { "docid": "c08e33f44b8e27529385b1557906dc81", "text": "A key challenge in wireless cognitive radio networks is to maximize the total throughput also known as the sum rates of all the users while avoiding the interference of unlicensed band secondary users from overwhelming the licensed band primary users. We study the weighted sum rate maximization problem with both power budget and interference temperature constraints in a cognitive radio network. This problem is nonconvex and generally hard to solve. We propose a reformulation-relaxation technique that leverages nonnegative matrix theory to first obtain a relaxed problem with nonnegative matrix spectral radius constraints. A useful upper bound on the sum rates is then obtained by solving a convex optimization problem over a closed bounded convex set. It also enables the sum-rate optimality to be quantified analytically through the spectrum of specially-crafted nonnegative matrices. Furthermore, we obtain polynomial-time verifiable sufficient conditions that can identify polynomial-time solvable problem instances, which can be solved by a fixed-point algorithm. As a by-product, an interesting optimality equivalence between the nonconvex sum rate problem and the convex max-min rate problem is established. In the general case, we propose a global optimization algorithm by utilizing our convex relaxation and branch-and-bound to compute an ε-optimal solution. Our technique exploits the nonnegativity of the physical quantities, e.g., channel parameters, powers and rates, that enables key tools in nonnegative matrix theory such as the (linear and nonlinear) Perron-Frobenius theorem, quasi-invertibility, Friedland-Karlin inequalities to be employed naturally. Numerical results are presented to show that our proposed algorithms are theoretically sound and have relatively fast convergence time even for large-scale problems", "title": "" }, { "docid": "7c13ebe2897fc4870a152159cda62025", "text": "Tuberculosis (TB) remains a major health threat, killing nearly 2 million individuals around this globe, annually. The only vaccine, developed almost a century ago, provides limited protection only during childhood. After decades without the introduction of new antibiotics, several candidates are currently undergoing clinical investigation. Curing TB requires prolonged combination of chemotherapy with several drugs. Moreover, monitoring the success of therapy is questionable owing to the lack of reliable biomarkers. To substantially improve the situation, a detailed understanding of the cross-talk between human host and the pathogen Mycobacterium tuberculosis (Mtb) is vital. 
Principally, the enormous success of Mtb is based on three capacities: first, reprogramming of macrophages after primary infection/phagocytosis to prevent its own destruction; second, initiating the formation of well-organized granulomas, comprising different immune cells to create a confined environment for the host-pathogen standoff; third, the capability to shut down its own central metabolism, terminate replication, and thereby transit into a stage of dormancy rendering itself extremely resistant to host defense and drug treatment. Here, we review the molecular mechanisms underlying these processes, draw conclusions in a working model of mycobacterial dormancy, and highlight gaps in our understanding to be addressed in future research.", "title": "" }, { "docid": "5f8f9a407c42a6a3c6c269c22d36f684", "text": "This paper proposes a coarse-fine dual-loop architecture for the digital low drop-out (LDO) regulators with fast transient response and more than 200-mA load capacity. In the proposed scheme, the output voltage is coregulated by two loops, namely, the coarse loop and the fine loop. The coarse loop adopts a fast current-mirror flash analog to digital converter and supplies high output current to enhance the transient performance, while the fine loop delivers low output current and helps reduce the voltage ripples and improve the regulation accuracies. Besides, a digital controller is implemented to prevent contentions between the two loops. Fabricated in a 28-nm Samsung CMOS process, the proposed digital LDO achieves maximum load up to 200 mA when the input and the output voltages are 1.1 and 0.9 V, respectively, with a chip area of 0.021 mm2. The measured output voltage drop of around 120 mV is observed for a load step of 180 mA.", "title": "" }, { "docid": "c71cfc228764fc96e7e747e119445939", "text": "This review discusses and summarizes the recent developments and advances in the use of biodegradable materials for bone repair purposes. The choice between using degradable and non-degradable devices for orthopedic and maxillofacial applications must be carefully weighed. Traditional biodegradable devices for osteosynthesis have been successful in low or mild load bearing applications. However, continuing research and recent developments in the field of material science has resulted in development of biomaterials with improved strength and mechanical properties. For this purpose, biodegradable materials, including polymers, ceramics and magnesium alloys have attracted much attention for osteologic repair and applications. The next generation of biodegradable materials would benefit from recent knowledge gained regarding cell material interactions, with better control of interfacing between the material and the surrounding bone tissue. The next generations of biodegradable materials for bone repair and regeneration applications require better control of interfacing between the material and the surrounding bone tissue. Also, the mechanical properties and degradation/resorption profiles of these materials require further improvement to broaden their use and achieve better clinical results.", "title": "" }, { "docid": "4142b1fc9e37ffadc6950105c3d99749", "text": "Just-noticeable distortion (JND), which refers to the maximum distortion that the human visual system (HVS) cannot perceive, plays an important role in perceptual image and video processing. 
In comparison with JND estimation for images, estimation of the JND profile for video needs to take into account the temporal HVS properties in addition to the spatial properties. In this paper, we develop a spatio-temporal model estimating JND in the discrete cosine transform domain. The proposed model incorporates the spatio-temporal contrast sensitivity function, the influence of eye movements, luminance adaptation, and contrast masking to be more consistent with human perception. It is capable of yielding JNDs for both still images and video with significant motion. The experiments conducted in this study have demonstrated that the JND values estimated for video sequences with moving objects by the model are in line with the HVS perception. The accurate JND estimation of the video towards the actual visibility bounds can be translated into resource savings (e.g., for bandwidth/storage or computation) and performance improvement in video coding and other visual processing tasks (such as perceptual quality evaluation, visual signal restoration/enhancement, watermarking, authentication, and error protection)", "title": "" }, { "docid": "d9bd23208ab6eb8688afea408a4c9eba", "text": "A novel ultra-wideband (UWB) bandpass filter with 5 to 6 GHz rejection band is proposed. The multiple coupled line structure is incorporated with multiple-mode resonator (MMR) to provide wide transmission band and enhance out-of band performance. To inhibit the signals ranged from 5- to 6-GHz, four stepped-impedance open stubs are implemented on the MMR without increasing the size of the proposed filter. The design of the proposed UWB filter has two transmission bands. The first passband from 2.8 GHz to 5 GHz has less than 2 dB insertion loss and greater than 18 dB return loss. The second passband within 6 GHz and 10.6 GHz has less than 1.5 dB insertion loss and greater than 15 dB return loss. The rejection at 5.5 GHz is better than 50 dB. This filter can be integrated in UWB radio systems and efficiently enhance the interference immunity from WLAN.", "title": "" } ]
scidocsrr
2dc4169b3e26ddb5c186a6d7d2c18c71
Training an End-to-End System for Handwritten Mathematical Expression Recognition by Generated Patterns
[ { "docid": "b4ab51818d868b2f9796540c71a7bd17", "text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.", "title": "" }, { "docid": "83e5f62d7f091260d4ae91c2d8f72d3d", "text": "Document recognition and retrieval technologies complement one another, providing improved access to increasingly large document collections. While recognition and retrieval of textual information is fairly mature, with wide-spread availability of optical character recognition and text-based search engines, recognition and retrieval of graphics such as images, figures, tables, diagrams, and mathematical expressions are in comparatively early stages of research. This paper surveys the state of the art in recognition and retrieval of mathematical expressions, organized around four key problems in math retrieval (query construction, normalization, indexing, and relevance feedback), and four key problems in math recognition (detecting expressions, detecting and classifying symbols, analyzing symbol layout, and constructing a representation of meaning). Of special interest is the machine learning problem of jointly optimizing the component algorithms in a math recognition system, and developing effective indexing, retrieval and relevance feedback algorithms for math retrieval. Another important open problem is developing user interfaces that seamlessly integrate recognition and retrieval. Activity in these important research areas is increasing, in part because math notation provides an excellent domain for studying problems common to many document and graphics recognition and retrieval applications, and also because mature applications will likely provide substantial benefits for education, research, and mathematical literacy.", "title": "" } ]
[ { "docid": "715d63ebb1316f7c35fd98871297b7d9", "text": "1. Associate Professor of Oncology of the State University of Ceará; Clinical Director of the Cancer Hospital of Ceará 2. Resident in Urology of Urology Department of the Federal University of Ceará 3. Associate Professor of Urology of the State University of Ceará; Assistant of the Division of Uro-Oncology, Cancer Hospital of Ceará 4. Professor of Urology Department of the Federal University of Ceará; Chief of Division of Uro-Oncology, Cancer Hospital of Ceará", "title": "" }, { "docid": "9cbd6482f91788583a5aca850d9652a6", "text": "Evaluating a visualization that depicts uncertainty is fraught with challenges due to the complex psychology of uncertainty. However, relatively little attention is paid to selecting and motivating a chosen interpretation or elicitation method for subjective probabilities in the uncertainty visualization literature. I survey existing evaluation work in uncertainty visualization, and examine how research in judgment and decision-making that focuses on subjective uncertainty elicitation sheds light on common approaches in visualization. I propose suggestions for practice aimed at reducing errors and noise related to how ground truth is defined for subjective probability estimates, the choice of an elicitation method, and the strategies used by subjects making judgments with an uncertainty visualization.", "title": "" }, { "docid": "b5e811e4ae761c185c6e545729df5743", "text": "Sleep assessment is of great importance in the diagnosis and treatment of sleep disorders. In clinical practice this is typically performed based on polysomnography recordings and manual sleep staging by experts. This procedure has the disadvantages that the measurements are cumbersome, may have a negative influence on the sleep, and the clinical assessment is labor intensive. Addressing the latter, there has recently been encouraging progress in the field of automatic sleep staging [1]. Furthermore, a minimally obtrusive method for recording EEG from electrodes in the ear (ear-EEG) has recently been proposed [2]. The objective of this study was to investigate the feasibility of automatic sleep stage classification based on ear-EEG. This paper presents a preliminary study based on recordings from a total of 18 subjects. Sleep scoring was performed by a clinical expert based on frontal, central and occipital region EEG, as well as EOG and EMG. 5 subjects were excluded from the study because of alpha wave contamination. In one subject the standard polysomnography was supplemented by ear-EEG. A single EEG channel sleep stage classifier was implemented using the same features and the same classifier as proposed in [1]. The performance of the single channel sleep classifier based on the scalp recordings showed an 85.7 % agreement with the manual expert scoring through 10-fold inter-subject cross validation, while the performance of the ear-EEG recordings was based on a 10-fold intra-subject cross validation and showed an 82 % agreement with the manual scoring. These results suggest that automatic sleep stage classification based on ear-EEG recordings may provide similar performance as compared to single channel scalp EEG sleep stage classification. 
Thereby ear-EEG may be a feasible technology for future minimal intrusive sleep stage classification.", "title": "" }, { "docid": "b8b82691002e3d694d5766ea3269a78e", "text": "This article presents a framework for improving the Software Configuration Management (SCM) process, that includes a maturity model to assess software organizations and an approach to guide the transition from diagnosis to action planning. The maturity model and assessment tool are useful to identify the degree of satisfaction for practices considered key for SCM. The transition approach is also important because the application of a model to produce a diagnosis is just a first step, organizations are demanding the generation of action plans to implement the recommendations. The proposed framework has been used to assess a number of software organizations and to generate the basis to build an action plan for improvement. In summary, this article shows that the maturity model and action planning approach are instrumental to reach higher SCM control and visibility, therefore producing higher quality software.", "title": "" }, { "docid": "6b953e7b796d2290f4f70b66be6609b2", "text": "BACKGROUND\nPatients suffering from symptomatic macromastia are usually underserved, as they have to put up with very long waiting lists and are usually selected under restrictive criteria. The Oncoplastic Breast Surgery subspeciality requires a cross-specialty training, which is difficult, in particular, for trainees who have a background in general surgery, and not easily available. The introduction of reduction mammaplasty into a Breast Cancer Unit as treatment for symptomatic macromastia could have a synergic effect, making the scarce therapeutic offer at present available to these patients, who are usually treated in Plastic Departments, somewhat larger, and accelerating the uptake of oncoplastic training as a whole and, specifically, the oncoplastic breast conserving procedures based on the reduction mammaplasty techniques such as displacement conservative techniques and onco-therapeutic mammaplasty. This is a retrospective study analyzing the outcome of reduction mammaplasty for symptomatic macromastia in our Breast Cancer Unit.\n\n\nMETHODS\nA cohort study of 56 patients who underwent bilateral reduction mammaplasty at our Breast Unit between 2005 and 2009 were evaluated; morbidity and patient satisfaction were considered as end points. Data were collected by reviewing medical records and interviewing patients.\n\n\nRESULTS\nEight patients (14.28%) presented complications in the early postoperative period, two of them being reoperated on. The physical symptoms disappeared or significantly improved in 88% of patients and the degree of satisfaction with the care process and with the overall outcome were really high.\n\n\nCONCLUSION\nOur experience of the introduction of reduction mammaplasty in our Breast Cancer Unit has given good results, enabling us to learn the use of different reduction mammaplasty techniques using several pedicles which made it possible to perform oncoplastic breast conserving surgery. In our opinion, this management policy could bring clear advantages both to patients (large-breasted and those with a breast cancer) and surgeons.", "title": "" }, { "docid": "35e25626cbcbb00fd36a9532a82de5d7", "text": "Resistance to chemotherapy and molecularly targeted therapies is a major problem facing current cancer research. 
The mechanisms of resistance to 'classical' cytotoxic chemotherapeutics and to therapies that are designed to be selective for specific molecular targets share many features, such as alterations in the drug target, activation of prosurvival pathways and ineffective induction of cell death. With the increasing arsenal of anticancer agents, improving preclinical models and the advent of powerful high-throughput screening techniques, there are now unprecedented opportunities to understand and overcome drug resistance through the clinical assessment of rational therapeutic drug combinations and the use of predictive biomarkers to enable patient stratification.", "title": "" }, { "docid": "06dc65e282b7be8d67faacdf71b4aca7", "text": "Audiovisual fusion is one of the most challenging tasks that continues to attract substantial research interest in the field of audiovisual automatic speech recognition AV-ASR. In the last few decades, many approaches for integrating the audio and video modalities were proposed to enhance the performance of automatic speech recognition in both clean and noisy conditions. However, very few studies can be found in the literature that compare different fusion models for AV-ASR. Even less research work compares audiovisual fusion models for large vocabulary continuous speech recognition LVCSR models using deep neural networks DNNs. This paper reviews and compares the performance of five audiovisual fusion models: the feature fusion model, the decision fusion model, the multistream hidden Markov model HMM, the coupled HMM, and the turbo decoders. A complete evaluation of these fusion models is conducted using a standard speaker-independent DNN-based LVCSR Kaldi recipe in three experimental setups: a clean-train-clean-test, a clean-train-noisy-test, and a matched-training setup. All experiments have been applied to the recently released NTCD-TIMIT audiovisual corpus. The task of NTCD-TIMIT is phone recognition in continuous speech. Using NTCD-TIMIT with its freely available visual features and 37 clean and noisy acoustic signals allows for this study to be a common benchmark, to which novel LVCSR AV-ASR models and approaches can be compared.", "title": "" }, { "docid": "698fb992c5ff7ecc8d2e153f6b385522", "text": "We investigate bag-of-visual-words (BOVW) approaches to land-use classification in high-resolution overhead imagery. We consider a standard non-spatial representation in which the frequencies but not the locations of quantized image features are used to discriminate between classes analogous to how words are used for text document classification without regard to their order of occurrence. We also consider two spatial extensions, the established spatial pyramid match kernel which considers the absolute spatial arrangement of the image features, as well as a novel method which we term the spatial co-occurrence kernel that considers the relative arrangement. These extensions are motivated by the importance of spatial structure in geographic data.\n The methods are evaluated using a large ground truth image dataset of 21 land-use classes. 
In addition to comparisons with standard approaches, we perform extensive evaluation of different configurations such as the size of the visual dictionaries used to derive the BOVW representations and the scale at which the spatial relationships are considered.\n We show that even though BOVW approaches do not necessarily perform better than the best standard approaches overall, they represent a robust alternative that is more effective for certain land-use classes. We also show that extending the BOVW approach with our proposed spatial co-occurrence kernel consistently improves performance.", "title": "" }, { "docid": "7c950863f51cbce128a37e50d78ec25f", "text": "We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.", "title": "" }, { "docid": "be83224a853fd65808def16ff20e9c02", "text": "Cascades of information-sharing are a primary mechanism by which content reaches its audience on social media, and an active line of research has studied how such cascades, which form as content is reshared from person to person, develop and subside. In this paper, we perform a large-scale analysis of cascades on Facebook over significantly longer time scales, and find that a more complex picture emerges, in which many large cascades recur, exhibiting multiple bursts of popularity with periods of quiescence in between. We characterize recurrence by measuring the time elapsed between bursts, their overlap and proximity in the social network, and the diversity in the demographics of individuals participating in each peak. We discover that content virality, as revealed by its initial popularity, is a main driver of recurrence, with the availability of multiple copies of that content helping to spark new bursts. Still, beyond a certain popularity of content, the rate of recurrence drops as cascades start exhausting the population of interested individuals. We reproduce these observed patterns in a simple model of content recurrence simulated on a real social network. Using only characteristics of a cascade’s initial burst, we demonstrate strong performance in predicting whether it will recur in the future.", "title": "" }, { "docid": "5daa99fa5e6dbfacbdd299c95d53cb6a", "text": "Data from the Indian National Family Health Survey, 2005-06 were used to explore how pregnancy intention at the time of conception influences a variety of maternal and child health and health care outcomes. Results indicate that mistimed children are more likely than wanted children to be delivered without a skilled attendant present (OR = 1.3), to not receive all recommended vaccinations (OR = 1.4), and to die during the neonatal and postneonatal periods (OR = 1.8 and 2.6, respectively). 
Unwanted children are more likely than wanted children to not receive all recommended vaccinations (OR = 2.2), to be stunted (OR = 1.3), and to die during the neonatal, postneonatal, and early childhood periods (OR = 2.2, 3.6, and 5.9, respectively). Given the high levels of unintended fertility in India (21 per cent of all births), these are striking findings that underscore the importance of investments in family planning.", "title": "" }, { "docid": "c8a16019564d99007efd88ca23d44d30", "text": "Cardiac masses are rare entities that can be broadly categorized as either neoplastic or non-neoplastic. Neoplastic masses include benign and malignant tumors. In the heart, metastatic tumors are more common than primary malignant tumors. Whether incidentally found or diagnosed as a result of patients' symptoms, cardiac masses can be identified and further characterized by a range of cardiovascular imaging options. While echocardiography remains the first-line imaging modality, cardiac computed tomography (cardiac CT) has become an increasingly utilized modality for the assessment of cardiac masses, especially when other imaging modalities are non-diagnostic or contraindicated. With high isotropic spatial and temporal resolution, fast acquisition times, and multiplanar image reconstruction capabilities, cardiac CT offers an alternative to cardiovascular magnetic resonance imaging in many patients. Additionally, cardiac masses may be incidentally discovered during cardiac CT for other reasons, requiring imagers to understand the unique features of a diverse range of cardiac masses. Herein, we define the characteristic imaging features of commonly encountered and selected cardiac masses and define the role of cardiac CT among noninvasive imaging options.", "title": "" }, { "docid": "148af36df5a403b33113ee5b9a7ad1d3", "text": "The experience of interacting with a robot has been shown to be very different in comparison to people's interaction experience with other technologies and artifacts, and often has a strong social or emotional component – a fact that raises concerns related to evaluation. In this paper we outline how this difference is due in part to the general complexity of robots' overall context of interaction, related to their dynamic presence in the real world and their tendency to invoke a sense of agency. A growing body of work in Human-Robot Interaction (HRI) focuses on exploring this overall context and tries to unpack what exactly is unique about interaction with robots, often through leveraging evaluation methods and frameworks designed for more-traditional HCI. We raise the concern that, due to these differences, HCI evaluation methods should be applied to HRI with care, and we present a survey of HCI evaluation techniques from the perspective of the unique challenges of robots. 
Further, we have developed a new set of tools to aid evaluators in targeting and unpacking the holistic human-robot interaction experience. Our technique surrounds the development of a map of interaction experience possibilities and, as part of this, we present a set of three perspectives for targeting specific components of interaction experience, and demonstrate how these tools can be practically used in evaluation. CR Subject Classification H.1.2 [Models and principles]: user/machine systems–software psychology", "title": "" }, { "docid": "91c0870355730f553f1dc104318bc55c", "text": "This paper reviews the main psychological phenomena of inductive reasoning, covering 25 years of experimental and model-based research, in particular addressing four questions. First, what makes a case or event generalizable to other cases? Second, what makes a set of cases generalizable? Third, what makes a property or predicate projectable? Fourth, how do psychological models of induction address these results? The key results in inductive reasoning are outlined, and several recent models, including a new Bayesian account, are evaluated with respect to these results. In addition, future directions for experimental and model-based work are proposed.", "title": "" }, { "docid": "d80fbd6e24d93991c8a64a8ecfb37d92", "text": "THE DEVELOPMENT OF PHYSICAL FITNESS IN YOUNG ATHLETES IS A RAPIDLY EXPANDING FIELD OF INTEREST FOR STRENGTH AND CONDITIONING COACHES, PHYSICAL EDUCATORS, SPORTS COACHES, AND PARENTS. PREVIOUS LONG-TERM ATHLETE DEVELOPMENT MODELS HAVE CLASSIFIED YOUTH-BASED TRAINING METHODOLOGIES IN RELATION TO CHRONOLOGIC AGE GROUPS, AN APPROACH THAT HAS DISTINCT LIMITATIONS. MORE RECENT MODELS HAVE ATTEMPTED TO BRIDGE MATURATION AND PERIODS OF TRAINABILITY FOR A LIMITED NUMBER OF FITNESS QUALITIES, ALTHOUGH SUCH MODELS APPEAR TO BE BASED ON SUBJECTIVE ANALYSIS. THE YOUTH PHYSICAL DEVELOPMENT MODEL PROVIDES A LOGICAL AND EVIDENCE-BASED APPROACH TO THE SYSTEMATIC DEVELOPMENT OF PHYSICAL PERFORMANCE IN YOUNG ATHLETES.", "title": "" }, { "docid": "0fe95e1e3f848d8ed1bc4b54c9ccfc5d", "text": "Procedural knowledge is the knowledge required to perform certain tasks, and forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While these readable instructions have been useful learning resources for human, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine interpretable formats from instructions has become an increasingly popular research topic due to their potential applications in process automation. However, it has been insufficiently addressed. This paper presents an approach and an implemented system to assist users to automatically acquire procedural knowledge in structured forms from instructions. We introduce a generic semantic representation of procedures for analysing instructions, using which natural language techniques are applied to automatically extract structured procedures from instructions. The method is evaluated in three domains to justify the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system.", "title": "" }, { "docid": "cb187dd9d739cdfccdf0adb3cdb0027a", "text": "A solid-phase microextraction (SPME) procedure involving direct contact between the SPME fibers and the solid matrix and subsequent gas chromatography/mass spectrometric analysis for the detection of accelerants in fire debris is described. 
The extraction performances of six fibers (100 μm polydimethylsiloxane, 65 μm polydimethylsiloxane-divinylbenzene, 85 μm polyacrylate, 85 μm carboxen-polydimethylsiloxane, 70 μm Carbowax-divinylbenzene, and 50/30 μm divinylbenzene-Carboxen-polydimethylsiloxane) were investigated by directly immersing the fibers into gasoline, kerosene, and diesel fuel. For simulated fire debris, in the direct contact extraction method, the SPME fiber was kept in contact with the fire debris matrix during extraction by penetrating plastic bags wrapping the sample. This method gave comparable results to the headspace SPME method in the extraction of gasoline and kerosene, and gave an improved recovery of low-volatile components in the extraction of diesel fuel from fire debris. The results demonstrate that this procedure is suitable as a simple and rapid screening method for detecting ignitable liquids in fire debris packed in plastic bags.", "title": "" }, { "docid": "17ecf3c7b53e81642cf0cb2d75c2bfb3", "text": "Serverless computing is widely known as an event-driven cloud execution model. In this model, the client provides the code and the cloud provider manages the life-cycle of the execution environment of that code. The idea is based on reducing the life span of the program to execute functionality in response to an event. Hence, the program's processes are born when an event is triggered and are killed after the event is processed. This model has proved its usefulness in the cloud as it reduced the operational cost and complexity of executing event-driven workloads. In this paper we argue that the serverless model does not have to be limited to the cloud. We show how the same model can be applied at the micro-level of a single machine. In such a model, certain operating system commands are treated as events that trigger a serverless reaction. This reaction consists of deploying and running code only in response to those events. Thus, reducing the attack surface and complexity of managing single machines.", "title": "" }, { "docid": "b191e7773eecc2562b1261e97ae0b0f4", "text": "This case report describes the effects of deep-pressure tactile stimulation in reducing self-stimulating behaviors in a child with multiple disabilities including autism. These behaviors include hitting the hands together, one hand on top of the other, so that the palm of one hand hits the dorsum of the other, or hitting a surface with one or both hands. Such behaviors not only made classroom efforts to have her use her hands for self-care functions such as holding an adapted spoon difficult or impossible, but also called attention to her disabling condition. These behaviors also were disruptive and noisy.", "title": "" }, { "docid": "d8056ee6b9d1eed4bc25e302c737780c", "text": "This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for Web pages independent of their textual content, solely based on the hyperlink structure of the Web. PageRank is typically used as a Web Search ranking component. This defines the importance of the model and the data structures that underlie PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much more complex challenge. Recently, significant effort has been invested in building sets of personalized PageRank vectors. PageRank is also used in many diverse applications other than ranking. 
Below we are interested in the theoretical foundations of the PageRank formulation, in accelerating PageRank computing, in the effects of particular aspects of Web graph structure on optimal organization of computations, and in PageRank stability. We also review alternative models that lead to authority indices similar to PageRank and the role of such indices in applications other than Web Search. We also discuss link-based search personalization and outline some aspects of PageRank infrastructure, from associated measures of convergence to link preprocessing.", "title": "" } ]
scidocsrr
86edae88e7564ca49d1363ef0b91f495
PassFrame: Generating image-based passwords from egocentric videos
[ { "docid": "c2402cea6e52ee98bc0c3de084580194", "text": "We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.", "title": "" }, { "docid": "2abaa413435e2d2c2b31bbe08c569d4a", "text": "We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6 month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data.", "title": "" } ]
[ { "docid": "483881d2c4ab6b25b019bdf1ebd75913", "text": "Copyright: © 2018 The Author(s) Abstract. In the last few years, leading-edge research from information systems, strategic management, and economics have separately informed our understanding of platforms and infrastructures in the digital age. Our motivation for undertaking this special issue rests in the conviction that it is significant to discuss platforms and infrastructures concomitantly, while enabling knowledge from diverse disciplines to cross-pollinate to address critical, pressing policy challenges and inform strategic thinking across both social and business spheres. In this editorial, we review key insights from the literature on digital infrastructures and platforms, present emerging research themes, highlight the contributions developed from each of the six articles in this special issue, and conclude with suggestions for further research.", "title": "" }, { "docid": "a587f915047435362cbad288e5f679db", "text": "OBJECTIVES\nThe American Academy of Pediatrics recommends that children over age 2 years spend < or = 2 hours per day with screen media, because excessive viewing has been linked to a plethora of physical, academic, and behavioral problems. The primary goal of this study was to qualitatively explore how a recommendation to limit television viewing might be received and responded to by a diverse sample of parents and their school-age children.\n\n\nMETHODS\nThe study collected background data about media use, gathered a household media inventory, and conducted in-depth individual and small group interviews with 180 parents and children ages 6 to 13 years old.\n\n\nRESULTS\nMost of the children reported spending approximately 3 hours per day watching television. The average home in this sample had 4 television sets; nearly two thirds had a television in the child's bedroom, and nearly half had a television set in the kitchen or dining room. Although virtually all of the parents reported having guidelines for children's television viewing, few had rules restricting the time children spend watching television. Data from this exploratory study suggest several potential barriers to implementing a 2-hour limit, including: parents' need to use television as a safe and affordable distraction, parents' own heavy television viewing patterns, the role that television plays in the family's day-to-day routine, and a belief that children should spend their weekend leisure time as they wish. Interviews revealed that for many of these families there is a lack of concern that television viewing is a problem for their child, and there remains confusion about the boundaries of the recommendation of the American Academy of Pediatrics.\n\n\nCONCLUSIONS\nParents in this study expressed interest in taking steps toward reducing children's television time but also uncertainty about how to go about doing so. Results suggest possible strategies to reduce the amount of time children spend in front of the screen.", "title": "" }, { "docid": "80ee5071f8b905d9f04a6fcad46b6004", "text": "The prediction of the future path of the ego vehicle and of other vehicles in the road environment is very important for safety applications, especially for collision avoidance systems. Today's available advanced driver assistance systems are mainly based on sensors that are installed in the vehicle. Due to the evolution of wireless networks the current trend is to exploit the cooperation among vehicles to enhance road safety. 
In this paper a cooperative path prediction algorithm is presented. This algorithm gathers position, velocity and yaw rate measurements from all vehicles in order to calculate the future paths. Specific care is taken in handling the latency of the wireless vehicular network. Also map data concerning the road geometry are used to enhance the estimation of path prediction. This work shows both the advances of using communications among road users and the corresponding challenges.", "title": "" }, { "docid": "08c2f734622b3ba4c3d71373139b9d58", "text": "This study was designed to compare the acute effect of self-myofascial release (SMR), postural alignment exercises, and static stretching on joint range-of-motion. Our sample included 27 participants (n = 14 males and n = 13 females) who had below average joint range-of-motion (specifically a sit-and-reach score of 13.5 inches [34.3 cm] or less). All were university students 18–27 years randomly assigned to complete two 30–40-minute data collection sessions with each testing session consisting of three sit-and-reach measurements (which involved lumbar spinal flexion, hip flexion, knee extension, and ankle dorsiflexion) interspersed with two treatments. Each treatment included foam-rolling, postural alignment exercises, or static stretching. Participants were assigned to complete session 1 and session 2 on two separate days, 24 hours to 48 hours apart. The data were analyzed so carryover effects could be estimated and showed that no single acute treatment significantly increased posterior mean sit-and-reach scores. However, significant gains (95% posterior probability limits) were realized with both postural alignment exercises and static stretching when used in combination with foam-rolling. For example, the posterior means equaled 1.71 inches (4.34 cm) when postural alignment exercises were followed by foam-rolling; 1.76 inches (4.47 cm) when foam-rolling was followed by static stretching; 1.49 inches (3.78 cm) when static stretching was followed by foam-rolling; and 1.18 inches (2.99 cm) when foam-rolling was followed by postural alignment exercises. Our results demonstrate that an acute treatment of foam-rolling significantly increased joint range-of-motion in participants with below average joint range-of-motion when combined with either postural alignment exercises or static stretching.", "title": "" }, { "docid": "36d2349c7a6643e3664089ed30ed2b62", "text": "Knowing the relative riskiness of different types of credit exposure is important for policymakers designing regulatory capital requirements and for firms allocating economic capital. This paper analyzes the risk structure of credit exposures with different maturities and credit qualities. We focus particularly on risks associated with (i) ratings transitions and (ii) spread changes for given ratings. We show that, for high quality debt, most risk stems from spread changes. This is significant because several recently proposed pricing and credit risk models assume zero spread risk. JEL Nos: C25, G21, G33.", "title": "" }, { "docid": "a203839d7ec2ca286ac93435aa552159", "text": "Boxer is a semantic parser for English texts with many input and output possibilities, and various ways to perform meaning analysis based on Discourse Representation Theory. 
This involves the various ways that meaning representations can be computed, as well as their possible semantic ingredients.", "title": "" }, { "docid": "1f52a93eff0c020564acc986b2fef0e7", "text": "The performance of a predictive model is overestimated when simply determined on the sample of subjects that was used to construct the model. Several internal validation methods are available that aim to provide a more accurate estimate of model performance in new subjects. We evaluated several variants of split-sample, cross-validation and bootstrapping methods with a logistic regression model that included eight predictors for 30-day mortality after an acute myocardial infarction. Random samples with a size between n = 572 and n = 9165 were drawn from a large data set (GUSTO-I; n = 40,830; 2851 deaths) to reflect modeling in data sets with between 5 and 80 events per variable. Independent performance was determined on the remaining subjects. Performance measures included discriminative ability, calibration and overall accuracy. We found that split-sample analyses gave overly pessimistic estimates of performance, with large variability. Cross-validation on 10% of the sample had low bias and low variability, but was not suitable for all performance measures. Internal validity could best be estimated with bootstrapping, which provided stable estimates with low bias. We conclude that split-sample validation is inefficient, and recommend bootstrapping for estimation of internal validity of a predictive logistic regression model.", "title": "" }, { "docid": "bb335297dae74b8c5f45666d8ccb1c6b", "text": "The popularity of Twitter attracts more and more spammers. Spammers send unwanted tweets to Twitter users to promote websites or services, which are harmful to normal users. In order to stop spammers, researchers have proposed a number of mechanisms. The focus of recent works is on the application of machine learning techniques into Twitter spam detection. However, tweets are retrieved in a streaming way, and Twitter provides the Streaming API for developers and researchers to access public tweets in real time. There lacks a performance evaluation of existing machine learning-based streaming spam detection methods. In this paper, we bridged the gap by carrying out a performance evaluation, which was from three different aspects of data, feature, and model. A big ground-truth of over 600 million public tweets was created by using a commercial URL-based security tool. For real-time spam detection, we further extracted 12 lightweight features for tweet representation. Spam detection was then transformed to a binary classification problem in the feature space and can be solved by conventional machine learning algorithms. We evaluated the impact of different factors to the spam detection performance, which included spam to nonspam ratio, feature discretization, training data size, data sampling, time-related data, and machine learning algorithms. The results show the streaming spam tweet detection is still a big challenge and a robust detection technique should take into account the three aspects of data, feature, and model.", "title": "" }, { "docid": "de21af25cede39d42c1064e626c621cb", "text": "This study examined the polyphenol composition and antioxidant properties of methanolic extracts from amaranth, quinoa, buckwheat and wheat, and evaluated how these properties were affected following two types of processing: sprouting and baking. 
The total phenol content amongst the seed extracts were significantly higher in buckwheat (323.4 mgGAE/100 g) and decreased in the following order: buckwheat > quinoa > wheat > amaranth. Antioxidant capacity, measured by the radical 2,2-diphenyl-1-picylhydrazyl scavenging capacity and the ferric ion reducing antioxidant power assays was also highest for buckwheat seed extract (p < 0.01). Total phenol content and antioxidant activity was generally found to increase with sprouting, and a decrease in levels was observed following breadmaking. Analysis by liquid chromatography coupled with diode array detector revealed the presence of phenolic acids, catechins, flavanol, flavone and flavonol glycosides. Overall, quinoa and buckwheat seeds and sprouts represent potential rich sources of polyphenol compounds for enhancing the nutritive properties of foods such as gluten-free breads. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "05f36ee9c051f8f9ea6e48d4fdd28dae", "text": "While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching . In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension. Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class. A preliminary version of this paper appeared in the Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 303{314. August 1991. Most of this research was carried out while both authors were at MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Grant CCR-88914428, and a grant from the Siemens Corporation. S. Goldman is currently supported in part by a G.E. Foundation Junior Faculty Grant and NSF Grant CCR-9110108.", "title": "" }, { "docid": "9c04940c0c3f0785194b569195563e80", "text": "bottle under his left foot. He woke up one morning and noticed burns on the lateral aspect of the foot. He did not seek any medical help and chose to dress the burn himself. After 2 weeks he noticed a foul smelling discharge from the wound and went to the Accident and Emergency (A&E) department. He was subsequently referred to our hospital department of plastic surgery. On examination he had a large burn on the lateral aspect of the left foot with a thick leathery eschar, which was starting to separate. There was also purulent discharge from the wound, as well as surrounding erythema (Figure 1). He was treated with antibiotics and dressings and was taken to the operating theatre after a few days for debridement of the burn and split skin grafting. The graft was inspected after 5 days and was found to have taken well. The second patient was a 57-year-old man with non-insulin-dependent diabetes mellitus, who was taking metformin and rosiglitazone and who also had peripheral neuropathy to both feet. He stated that he had not received any specific advice about avoiding accidental injuries to insensate body parts. He used a hot water bottle filled with water from a kettle for warming his feet. He had slept with the bottle between his feet and woke up one morning to notice burns over both feet. 
He went to the nearest A&E department and was referred to our unit subsequently. On examination he had burns over the lateral aspect of his left foot and the medial aspect of his Rajive Mathew Jose, Ramesh Vidyadharan, Deb Kumar Roy and Matt Erdmann", "title": "" }, { "docid": "a2b9c5f2b6299d0de91d80f9316a02e7", "text": "In this paper, with the help of knowledge base, we build and formulate a semantic space to connect the source and target languages, and apply it to the sequence-to-sequence framework to propose a Knowledge-Based Semantic Embedding (KBSE) method. In our KBSE method, the source sentence is firstly mapped into a knowledge based semantic space, and the target sentence is generated using a recurrent neural network with the internal meaning preserved. Experiments are conducted on two translation tasks, the electric business data and movie data, and the results show that our proposed method can achieve outstanding performance, compared with both the traditional SMT methods and the existing encoder-decoder models.", "title": "" }, { "docid": "47dbbd8eb1c52c4f7f115c9553bf4c8a", "text": "We study the problem of learning the best Bayesian network structure with respect to a decomposable score such as BDe, BIC or AIC. This problem is known to be NP-hard, which means that solving it becomes quickly infeasible as the number of variables increases. Nevertheless, in this paper we show that it is possible to learn the best Bayesian network structure with over 30 variables, which covers many practically interesting cases. Our algorithm is less complicated and more efficient than the techniques presented earlier. It can be easily parallelized, and offers a possibility for efficient exploration of the best networks consistent with different variable orderings. In the experimental part of the paper we compare the performance of the algorithm to the previous state-of-the-art algorithm. Free source-code and an online-demo can be found at http://b-course.hiit.fi/bene.", "title": "" }, { "docid": "54465eccc901a8258b5b6633c4c36958", "text": "Melatonin (5-methoxy-N-acetyltryptamine), dubbed the hormone of darkness, is released following a circadian rhythm with high levels at night. It provides circadian and seasonal timing cues through activation of G protein-coupled receptors (GPCRs) in target tissues (1). The discovery of selective melatonin receptor ligands and the creation of mice with targeted disruption of melatonin receptor genes are valuable tools to investigate the localization and functional roles of the receptors in native systems. Here we describe the pharmacological characteristics of melatonin receptor ligands and their various efficacies (agonist, antagonist, or inverse agonist), which can vary depending on tissue and cellular milieu. We also review melatonin-mediated responses through activation of melatonin receptors (MT1, MT2, and MT3) highlighting their involvement in modulation of CNS, hypothalamic-hypophyseal-gonadal axis, cardiovascular, and immune functions. For example, activation of the MT1 melatonin receptor inhibits neuronal firing rate in the suprachiasmatic nucleus (SCN) and prolactin secretion from the pars tuberalis and induces vasoconstriction. Activation of the MT2 melatonin receptor phase shifts circadian rhythms generated within the SCN, inhibits dopamine release in the retina, induces vasodilation, enhances splenocyte proliferation and inhibits leukocyte rolling in the microvasculature. 
Activation of the MT3 melatonin receptor reduces intraocular pressure and inhibits leukotriene B4-induced leukocyte adhesion. We conclude that an accurate characterization of melatonin receptors mediating specific functions in native tissues can only be made using receptor specific ligands, with the understanding that receptor ligands may change efficacy in both native tissues and heterologous expression systems.", "title": "" }, { "docid": "c14e8760bf0a405519579f73d870cf1b", "text": "Unlike a univariate decision tree, a multivariate decision tree is not restricted to splits of the instance space that are orthogonal to the features' axes. This article addresses several issues for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. We present several new methods for forming multivariate decision trees and compare them with several well-known methods. We compare the different methods across a variety of learning tasks, in order to assess each method's ability to find concise, accurate decision trees. The results demonstrate that some multivariate methods are in general more effective than others (in the context of our experimental assumptions). In addition, the experiments confirm that allowing multivariate tests generally improves the accuracy of the resulting decision tree over a univariate tree.", "title": "" }, { "docid": "8de25881e8a5f12f891656f271c44d4d", "text": "Forest fires play a critical role in landscape transformation, vegetation succession, soil degradation and air quality. Improvements in fire risk estimation are vital to reduce the negative impacts of fire, either by lessen burn severity or intensity through fuel management, or by aiding the natural vegetation recovery using post-fire treatments. This paper presents the methods to generate the input variables and the risk integration developed within the Firemap project (funded under the Spanish Ministry of Science and Technology) to map wildland fire risk for several regions of Spain. After defining the conceptual scheme for fire risk assessment, the paper describes the methods used to generate the risk parameters, and presents", "title": "" }, { "docid": "74fcade8e5f5f93f3ffa27c4d9130b9f", "text": "Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features are computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long short-term memory (LSTM) based network for classification and localization. We compare the performance of detection/localization of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.", "title": "" }, { "docid": "c26f06abb768c7b6d1a22172078aaf00", "text": "In complex conversation tasks, people react to their interlocutor’s state, such as uncertainty and engagement to improve conversation effectiveness [2]. 
If a conversational system reacts to a user's state, would that lead to a better conversation experience? To test this hypothesis, we designed and implemented a dialog system that tracks and reacts to a user's state, such as engagement, in real time. We designed and implemented a conversational job interview task based on the proposed framework. The system acts as an interviewer and reacts to user's disengagement in real-time with positive feedback strategies designed to re-engage the user in the job interview process. Experiments suggest that users speak more while interacting with the engagement-coordinated version of the system as compared to a noncoordinated version. Users also reported the former system as being more engaging and providing a better user experience.", "title": "" },
    { "docid": "156aa734b3c60681c14220a8cb51f2e8", "text": "Graph representations of large knowledge bases may comprise billions of edges. Usually built upon human-generated ontologies, several knowledge bases do not feature declared ontological rules and are far from being complete. Current rule mining approaches rely on schemata or store the graph in-memory, which can be unfeasible for large graphs. In this paper, we introduce HornConcerto, an algorithm to discover Horn clauses in large graphs without the need of a schema. Using a standard fact-based confidence score, we can mine close Horn rules having an arbitrary body size. We show that our method can outperform existing approaches in terms of runtime and memory consumption and mine high-quality rules for the link prediction task, achieving state-of-the-art results on a widely-used benchmark. Moreover, we find that rules alone can perform inference significantly faster than embedding-based methods and achieve accuracies on link prediction comparable to resource-demanding approaches such as Markov Logic Networks.", "title": "" },
    { "docid": "62b4c804713954abc32df277ce88a3e2", "text": "Without the proper choice of constraints, autoencoders (AEs) are capable of learning identity mapping or overcomplete representations. The features learned by this architecture may be local, isolated, or primitive. The extraction of features, however, can be controlled by judiciously enforcing some desired attributes in the form of constraints on its parameters. This article gives an overview of AEs and such constraints for data representation. It also puts AE learning in the broader context of dictionary learning.", "title": "" } ]
scidocsrr
fec5a9f5e8e9adf4083b558236256656
Green-lighting Movie Scripts: Revenue Forecasting and Risk Management
[ { "docid": "f66854fd8e3f29ae8de75fc83d6e41f5", "text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.", "title": "" } ]
[ { "docid": "0de0093ab3720901d4704bfeb7be4093", "text": "Big Data analytics can revolutionize the healthcare industry. It can improve operational efficiencies, help predict and plan responses to disease epidemics, improve the quality of monitoring of clinical trials, and optimize healthcare spending at all levels from patients to hospital systems to governments. This paper provides an overview of Big Data, applicability of it in healthcare, some of the work in progress and a future outlook on how Big Data analytics can improve overall quality in healthcare systems.", "title": "" }, { "docid": "c68ec0f721c8d8bfa27a415ba10708cf", "text": "Textures are widely used in modern computer graphics. Their size, however, is often a limiting factor. Considering the widespread adaptation of mobile virtual and augmented reality applications, efficient storage of textures has become an important factor.\n We present an approach to analyse textures of a given mesh and compute a new set of textures with the goal of improving storage efficiency and reducing memory requirements. During this process the texture coordinates of the mesh are updated as required. Textures are analysed based on the UV-coordinates of one or more meshes and deconstructed into per-triangle textures. These are further analysed to detect single coloured as well as identical per-triangle textures. Our approach aims to remove these redundancies in order to reduce the amount of memory required to store the texture data. After this analysis, the per-triangle textures are compiled into a new set of texture images of user defined size. Our algorithm aims to pack texture data as tightly as possible in order to reduce the memory requirements.", "title": "" }, { "docid": "7874a6681c45d87345197245e1e054fe", "text": "The continuous processing of streaming data has become an important aspect in many applications. Over the last years a variety of different streaming platforms has been developed and a number of open source frameworks is available for the implementation of streaming applications. In this report, we will survey the landscape of existing streaming platforms. Starting with an overview of the evolving developments in the recent past, we will discuss the requirements of modern streaming architectures and present the ways these are approached by the different frameworks.", "title": "" }, { "docid": "8decac4ff789460595664a38e7527ed6", "text": "Unit selection synthesis has shown itself to be capable of producing high quality natural sounding synthetic speech when constructed from large databases of well-recorded, well-labeled speech. However, the cost in time and expertise of building such voices is still too expensive and specialized to be able to build individual voices for everyone. The quality in unit selection synthesis is directly related to the quality and size of the database used. As we require our speech synthesizers to have more variation, style and emotion, for unit selection synthesis, much larger databases will be required. As an alternative, more recently we have started looking for parametric models for speech synthesis, that are still trained from databases of natural speech but are more robust to errors and allow for better modeling of variation. This paper presents the CLUSTERGEN synthesizer which is implemented within the Festival/FestVox voice building environment. 
As well as the basic technique, three methods of modeling dynamics in the signal are presented and compared: a simple point model, a basic trajectory model and a trajectory model with overlap and add.", "title": "" }, { "docid": "c99e4708a72c08569c25423efbe67775", "text": "Predicting the next activity of a running process is an important aspect of process management. Recently, artificial neural networks, so called deep-learning approaches, have been proposed to address this challenge. This demo paper describes a software application that applies the Tensorflow deep-learning framework to process prediction. The software application reads industry-standard XES files for training and presents the user with an easy-to-use graphical user interface for both training and prediction. The system provides several improvements over earlier work. This demo paper focuses on the software implementation and describes the architecture and user interface.", "title": "" }, { "docid": "08ca7be2334de477905e8766c8612c8f", "text": "a r t i c l e i n f o a b s t r a c t A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.", "title": "" }, { "docid": "fb8e6eac761229fc8c12339fb68002ed", "text": "Cerebrovascular disease results from any pathological process of the blood vessels supplying the brain. Stroke, characterised by its abrupt onset, is the third leading cause of death in humans. This rare condition in dogs is increasingly being recognised with the advent of advanced diagnostic imaging. Magnetic resonance imaging (MRI) is the first choice diagnostic tool for stroke, particularly using diffusion-weighted images and magnetic resonance angiography for ischaemic stroke and gradient echo sequences for haemorrhagic stroke. An underlying cause is not always identified in either humans or dogs. Underlying conditions that may be associated with canine stroke include hypothyroidism, neoplasia, sepsis, hypertension, parasites, vascular malformation and coagulopathy. Treatment is mainly supportive and recovery often occurs within a few weeks. The prognosis is usually good if no underlying disease is found.", "title": "" }, { "docid": "66782c46d59dd9ef225e9f3ea0b47cfe", "text": "Intraoperative vital signals convey a wealth of complex temporal information that can provide significant insights into a patient's physiological status during the surgery, as well as outcomes after the surgery. 
Our study involves the use of a deep recurrent neural network architecture to predict patient's outcomes after the surgery, as well as to predict the immediate changes in the intraoperative signals during the surgery. More specifically, we will use a Long Short-Term Memory (LSTM) model which is a gated deep recurrent neural network architecture. We have performed two experiments on a large intraoperative dataset of 12,036 surgeries containing information on 7 intraoperative signals including body temperature, respiratory rate, heart rate, diastolic blood pressure, systolic blood pressure, fraction of inspired O2 and end-tidal CO2. We first evaluated the capability of LSTM in predicting the immediate changes in intraoperative signals, and then we evaluated its performance on predicting each patient's length of stay outcome. Our experiments show the effectiveness of LSTM with promising results on both tasks compared to the traditional models.", "title": "" }, { "docid": "4799b4aa7e936d88fef0bb1e1f95f401", "text": "This article summarizes and reviews the literature on neonaticide, infanticide, and filicide. A literature review was conducted using the Medline database: the cue terms neonaticide, infanticide, and filicide were searched. One hundred-fifteen articles were reviewed; of these, 51 are cited in our article. We conclude that while infanticide dates back to the beginning of recorded history, little is known about what causes parents to murder their children. To this end, further research is needed to identify potential perpetrators and to prevent subsequent acts of child murder by a parent.", "title": "" }, { "docid": "852c85ecbed639ea0bfe439f69fff337", "text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which enlightens us the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the FisherShannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide with a better comprehension of VAEs in tasks such as highresolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.", "title": "" }, { "docid": "b752f0f474b8f275f09d446818647564", "text": "n engl j med 377;15 nejm.org October 12, 2017 4. Aysola J, Tahirovic E, Troxel AB, et al. A randomized controlled trial of opt-in versus opt-out enrollment into a diabetes behavioral intervention. Am J Health Promot 2016 October 21 (Epub ahead of print). 5. Mehta SJ, Troxel AB, Marcus N, et al. Participation rates with opt-out enrollment in a remote monitoring intervention for patients with myocardial infarction. 
JAMA Cardiol 2016; 1: 847-8. DOI: 10.1056/NEJMp1707991", "title": "" }, { "docid": "ec6c62f25c987446522b49840c4242d7", "text": "Have you ever been in a sauna? If yes, according to our recent survey conducted on Amazon Mechanical Turk, people who go to saunas are more likely to know that Mike Stonebraker is not a character in “The Simpsons”. While this result clearly makes no sense, recently proposed tools to automatically suggest visualizations, correlations, or perform visual data exploration, significantly increase the chance that a user makes a false discovery like this one. In this paper, we first show how current tools mislead users to consider random fluctuations as significant discoveries. We then describe our vision and early results for QUDE, a new system for automatically controlling the various risk factors during the data exploration process.", "title": "" }, { "docid": "c9582409212e6f9b194175845216b2b6", "text": "Although the amygdala complex is a brain area critical for human behavior, knowledge of its subspecialization is primarily derived from experiments in animals. We here employed methods for large-scale data mining to perform a connectivity-derived parcellation of the human amygdala based on whole-brain coactivation patterns computed for each seed voxel. Voxels within the histologically defined human amygdala were clustered into distinct groups based on their brain-wide coactivation maps. Using this approach, connectivity-based parcellation divided the amygdala into three distinct clusters that are highly consistent with earlier microstructural distinctions. Meta-analytic connectivity modelling then revealed the derived clusters' brain-wide connectivity patterns, while meta-data profiling allowed their functional characterization. These analyses revealed that the amygdala's laterobasal nuclei group was associated with coordinating high-level sensory input, whereas its centromedial nuclei group was linked to mediating attentional, vegetative, and motor responses. The often-neglected superficial nuclei group emerged as particularly sensitive to olfactory and probably social information processing. The results of this model-free approach support the concordance of structural, connectional, and functional organization in the human amygdala and point to the importance of acknowledging the heterogeneity of this region in neuroimaging research.", "title": "" }, { "docid": "2c7bafac9d4c4fedc43982bd53c99228", "text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. 
One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.", "title": "" }, { "docid": "f65c3e60dbf409fa2c6e58046aad1e1c", "text": "The gut microbiota is essential for the development and regulation of the immune system and the metabolism of the host. Germ-free animals have altered immunity with increased susceptibility to immunologic diseases and show metabolic alterations. Here, we focus on two of the major immune-mediated microbiota-influenced components that signal far beyond their local environment. First, the activation or suppression of the toll-like receptors (TLRs) by microbial signals can dictate the tone of the immune response, and they are implicated in regulation of the energy homeostasis. Second, we discuss the intestinal mucosal surface is an immunologic component that protects the host from pathogenic invasion, is tightly regulated with regard to its permeability and can influence the systemic energy balance. The short chain fatty acids are a group of molecules that can both modulate the intestinal barrier and escape the gut to influence systemic health. 
As modulators of the immune response, the microbiota-derived signals influence functions of distant organs and can change susceptibility to metabolic diseases.", "title": "" }, { "docid": "a8e3fd9ddfdb1eaea980246489579812", "text": "With modern computer graphics, we can generate enormous amounts of 3D scene data. It is now possible to capture high-quality 3D representations of large real-world environments. Large shape and scene databases, such as the Trimble 3D Warehouse, are publicly accessible and constantly growing. Unfortunately, while a great amount of 3D content exists, most of it is detached from the semantics and functionality of the objects it represents. In this paper, we present a method to establish a correlation between the geometry and the functionality of 3D environments. Using RGB-D sensors, we capture dense 3D reconstructions of real-world scenes, and observe and track people as they interact with the environment. With these observations, we train a classifier which can transfer interaction knowledge to unobserved 3D scenes. We predict a likelihood of a given action taking place over all locations in a 3D environment and refer to this representation as an action map over the scene. We demonstrate prediction of action maps in both 3D scans and virtual scenes. We evaluate our predictions against ground truth annotations by people, and present an approach for characterizing 3D scenes by functional similarity using action maps.", "title": "" }, { "docid": "87e732240f00b112bf2bb44af0ff8ca1", "text": "Spoken Dialogue Systems (SDS) are man-machine interfaces which use natural language as the medium of interaction. Dialogue corpora collection for the purpose of training and evaluating dialogue systems is an expensive process. User simulators aim at simulating human users in order to generate synthetic data. Existing methods for user simulation mainly focus on generating data with the same statistical consistency as in some reference dialogue corpus. This paper outlines a novel approach for user simulation based on Inverse Reinforcement Learning (IRL). The task of building the user simulator is perceived as a task of imitation learning.", "title": "" }, { "docid": "32f6db1bf35da397cd61d744a789d49c", "text": "Mushroom poisoning is the main cause of mortality in food poisoning incidents in China. Although some responsible mushroom species have been identified, some were identified inaccuratly. This study investigated and analyzed 102 mushroom poisoning cases in southern China from 1994 to 2012, which involved 852 patients and 183 deaths, with an overall mortality of 21.48 %. The results showed that 85.3 % of poisoning cases occurred from June to September, and involved 16 species of poisonous mushroom: Amanita species (A. fuliginea, A. exitialis, A. subjunquillea var. alba, A. cf. pseudoporphyria, A. kotohiraensis, A. neoovoidea, A. gymnopus), Galerina sulciceps, Psilocybe samuiensis, Russula subnigricans, R. senecis, R. japonica, Chlorophyllum molybdites, Paxillus involutus, Leucocoprinus cepaestipes and Pulveroboletus ravenelii. Six species (A. subjunquillea var. alba, A. cf. pseudoporphyria, A. gymnopus, R. japonica, Psilocybe samuiensis and Paxillus involutus) are reported for the first time in poisoning reports from China. Psilocybe samuiensis is a newly recorded species in China. The genus Amanita was responsible for 70.49 % of fatalities; the main lethal species were A. fuliginea and A. exitialis. 
Russula subnigricans caused 24.59 % of fatalities, and five species showed mortality >20 % (A. fuliginea, A. exitialis, A. subjunquillea var. alba, R. subnigricans and Paxillus involutus). Mushroom poisoning symptoms were classified from among the reported clinical symptoms. Seven types of mushroom poisoning symptoms were identified for clinical diagnosis and treatment in China, including gastroenteritis, acute liver failure, acute renal failure, psychoneurological disorder, hemolysis, rhabdomyolysis and photosensitive dermatitis.", "title": "" }, { "docid": "a6fc1c70b4bab666d5d580214fa3e09f", "text": "Software designs decay as systems, uses, and operational environments evolve. Decay can involve the design patterns used to structure a system. Classes that participate in design pattern realizations accumulate grime—non-pattern-related code. Design pattern realizations can also rot, when changes break the structural or functional integrity of a design pattern. Design pattern rot can prevent a pattern realization from fulfilling its responsibilities, and thus represents a fault. Grime buildup does not break the structural integrity of a pattern but can reduce system testability and adaptability. This research examined the extent to which software designs actually decay, rot, and accumulate grime by studying the aging of design patterns in three successful object-oriented systems. We generated UML models from the three implementations and employed a multiple case study methodology to analyze the evolution of the designs. We found no evidence of design pattern rot in these systems. However, we found considerable evidence of pattern decay due to grime. Dependencies between design pattern components increased without regard for pattern intent, reducing pattern modularity, and decreasing testability and adaptability. The study of decay and grime showed that the grime that builds up around design patterns is mostly due to increases in coupling.", "title": "" }, { "docid": "998bf65b2e95db90eb9fab8e13b47ff6", "text": "Recently, deep neural networks (DNNs) have been regarded as the state-of-the-art classification methods in a wide range of applications, especially in image classification. Despite the success, the huge number of parameters blocks its deployment to situations with light computing resources. Researchers resort to the redundancy in the weights of DNNs and attempt to find how fewer parameters can be chosen while preserving the accuracy at the same time. Although several promising results have been shown along this research line, most existing methods either fail to significantly compress a well-trained deep network or require a heavy fine-tuning process for the compressed network to regain the original performance. In this paper, we propose the Block Term networks (BT-nets) in which the commonly used fully-connected layers (FC-layers) are replaced with block term layers (BT-layers). In BT-layers, the inputs and the outputs are reshaped into two low-dimensional high-order tensors, then block-term decomposition is applied as tensor operators to connect them. We conduct extensive experiments on benchmark datasets to demonstrate that BT-layers can achieve a very large compression ratio on the number of parameters while preserving the representation power of the original FC-layers as much as possible. Specifically, we can get a higher performance while requiring fewer parameters compared with the tensor train method.", "title": "" } ]
scidocsrr
375739927ac2c48bd2575c5fb608bfaf
Aligned Cluster Analysis for temporal segmentation of human motion
[ { "docid": "ae58bc6ced30bf2c855473541840ec4d", "text": "Techniques from the image and signal processing domain can be successfully applied to designing, modifying, and adapting animated motion. For this purpose, we introduce multiresolution motion filtering, multitarget motion interpolation with dynamic timewarping, waveshaping and motion displacement mapping. The techniques are well-suited for reuse and adaptation of existing motion data such as joint angles, joint coordinates or higher level motion parameters of articulated figures with many degrees of freedom. Existing motions can be modified and combined interactively and at a higher level of abstraction than conventional systems support. This general approach is thus complementary to keyframing, motion capture, and procedural animation.", "title": "" } ]
[ { "docid": "d90add899632bab1c5c2637c7080f717", "text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.", "title": "" }, { "docid": "3580c05a6564e7e09c6577026da69fe9", "text": "Inpainting based image compression approaches, especially linear and non-linear diffusion models, are an active research topic for lossy image compression. The major challenge in these compression models is to find a small set of descriptive supporting points, which allow for an accurate reconstruction of the original image. It turns out in practice that this is a challenging problem even for the simplest Laplacian interpolation model. In this paper, we revisit the Laplacian interpolation compression model and introduce two fast algorithms, namely successive preconditioning primal dual algorithm and the recently proposed iPiano algorithm, to solve this problem efficiently. Furthermore, we extend the Laplacian interpolation based compression model to a more general form, which is based on principles from bi-level optimization. We investigate two different variants of the Laplacian model, namely biharmonic interpolation and smoothed Total Variation regularization. Our numerical results show that significant improvements can be obtained from the biharmonic interpolation model, and it can recover an image with very high quality from only 5% pixels.", "title": "" }, { "docid": "a7607444b58f0e86000c7f2d09551fcc", "text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.", "title": "" }, { "docid": "c3c3add0c42f3b98962c4682a72b1865", "text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. 
The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.", "title": "" }, { "docid": "b109db8e315d904901021224745c9e26", "text": "IP lookup and routing table update affect the speed at which a router forwards packets. This study proposes a new data structure for dynamic router tables used in IP lookup and update, called the Multi-inherited Search Tree (MIST). Partitioning each prefix according to an index value and removing the relationships among prefixes enables performing IP lookup operations efficiently. Because a prefix trie is used as a substructure, memory can be consumed and dynamic router-table operations can be performed efficiently. Experiments using real IPv4 routing databases indicated that the MIST uses memory efficiently and performs lookup, insert, and delete operations effectively.", "title": "" }, { "docid": "41aa05455471ecd660599f4ec285ff29", "text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.", "title": "" }, { "docid": "d500b28961f2346f1caac6a11fe9b2bd", "text": "In the late 19th century, DeWecker initially described the use of optic nerve sheath fenestration (ONSF) in a case of neuroretinitis at a time when little was known about the pathophysiology of optic nerve swelling. The procedure lay relatively dormant until renewed interest arose from studies investigating the axonal basis of papilledema and its resolution with ONSF. This surgery has been utilized in a variety of other optic nerve conditions not related to papilledema, with largely disappointing results, including the Ischemic Optic Neuropathy Decompression Trial. 
Although prospective clinical trials have not been performed to compare the efficacy of ONSF to other treatment modalities like shunting procedures, many studies have confirmed that ONSF can play a significant role in preventing vision loss in conditions where intracranial pressure (ICP) is elevated, like idiopathic intracranial hypertension (IIH).", "title": "" }, { "docid": "1653caa3ac10c831eddd6dfdbffa4725", "text": "To control and price negative externalities in passenger road transport, we develop an innovative and integrated computational agent based economics (ACE) model to simulate a market oriented “cap” and trade system. (i) First, there is a computational assessment of a digitized road network model of the real world congestion hot spot to determine the “cap” of the system in terms of vehicle volumes at which traffic efficiency deteriorates and the environmental externalities take off exponentially. (ii) Road users submit bids with the market clearing price at the fixed “cap” supply of travel slots in a given time slice (peak hour) being determined by an electronic sealed bid uniform price Dutch auction. (iii) Cross-sectional demand data on car users who traverse the cordon area is used to model and calibrate the heterogeneous bid submission behaviour in order to construct the inverse demand function and demand elasticities. (iv) The willingness to pay approach with heterogeneous value of time is contrasted with the generalized cost approach to pricing congestion with homogenous value of travel time. JEL Classification: R41, R48, C99, D44, H41", "title": "" }, { "docid": "049c6062613d0829cf39cbfe4aedca7a", "text": "Deep neural networks (DNN) are widely used in many applications. However, their deployment on edge devices has been difficult because they are resource hungry. Binary neural networks (BNN) help to alleviate the prohibitive resource requirements of DNN, where both activations and weights are limited to 1-bit. We propose an improved binary training method (BNN+), by introducing a regularization function that encourages training weights around binary values. In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions. Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation. These additions are based on linear operations that are easily implementable into the binary training framework. We show experimental results on CIFAR-10 obtaining an accuracy of 86.5%, on AlexNet and 91.3% with VGG network. On ImageNet, our method also outperforms the traditional BNN method and XNOR-net, using AlexNet by a margin of 4% and 2% top-1 accuracy respectively.", "title": "" }, { "docid": "659f362b1f30c32cdaca90e3141596fb", "text": "Purpose – The paper aims to focus on so-called NoSQL databases in the context of cloud computing. Design/methodology/approach – Architectures and basic features of these databases are studied, particularly their horizontal scalability and concurrency model, that is mostly weaker than ACID transactions in relational SQL-like database systems. Findings – Some characteristics like a data model and querying capabilities of NoSQL databases are discussed in more detail. 
Originality/value – The paper shows vary different data models and query possibilities in a common terminology enabling comparison and categorization of NoSQL databases.", "title": "" }, { "docid": "316f7f744db9f8f66c9f4d5b69e7431d", "text": "We propose automated sport game models as a novel technical means for the analysis of team sport games. The basic idea is that automated sport game models are based on a conceptualization of key notions in such games and probabilistically derived from a set of previous games. In contrast to existing approaches, automated sport game models provide an analysis that is sensitive to their context and go beyond simple statistical aggregations allowing objective, transparent and meaningful concept definitions. Based on automatically gathered spatio-temporal data by a computer vision system, a model hierarchy is built bottom up, where context-sensitive concepts are instantiated by the application of machine learning techniques. We describe the current state of implementation of the ASPOGAMO system including its computer vision subsystem that realizes the idea of automated sport game models. Their usage is exemplified with an analysis of the final of the soccer World Cup 2006.", "title": "" }, { "docid": "aee91ee5d4cbf51d9ce1344be4e5448c", "text": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables to transfer techniques across research lines in a principled way. For example, we apply the importance weighting method in VAE literatures for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show generality and effectiveness of the transfered techniques.", "title": "" }, { "docid": "8c07982729ca439c8e346cbe018a7198", "text": "The need for diversification manifests in various recommendation use cases. In this work, we propose a novel approach to diversifying a list of recommended items, which maximizes the utility of the items subject to the increase in their diversity. From a technical perspective, the problem can be viewed as maximization of a modular function on the polytope of a submodular function, which can be solved optimally by a greedy method. We evaluate our approach in an offline analysis, which incorporates a number of baselines and metrics, and in two online user studies. In all the experiments, our method outperforms the baseline methods.", "title": "" }, { "docid": "34fa7e6d5d4f1ab124e3f12462e92805", "text": "Natural image modeling plays a key role in many vision problems such as image denoising. Image priors are widely used to regularize the denoising process, which is an ill-posed inverse problem. 
One category of denoising methods exploit the priors (e.g., TV, sparsity) learned from external clean images to reconstruct the given noisy image, while another category of methods exploit the internal prior (e.g., self-similarity) to reconstruct the latent image. Though the internal prior based methods have achieved impressive denoising results, the improvement of visual quality will become very difficult with the increase of noise level. In this paper, we propose to exploit image external patch prior and internal self-similarity prior jointly, and develop an external patch prior guided internal clustering algorithm for image denoising. It is known that natural image patches form multiple subspaces. By utilizing Gaussian mixture models (GMMs) learning, image similar patches can be clustered and the subspaces can be learned. The learned GMMs from clean images are then used to guide the clustering of noisy-patches of the input noisy images, followed by a low-rank approximation process to estimate the latent subspace for image recovery. Numerical experiments show that the proposed method outperforms many state-of-the-art denoising algorithms such as BM3D and WNNM.", "title": "" }, { "docid": "affe52d4bb21526596ba5c131fb871c8", "text": "Developing large scale software projects involves huge efforts at every stage of the software development life cycle (SDLC). This led researchers and practitioners to develop software processes and methodologies that will assist software developers and improve their operations. Software processes evolved and took multiple approaches to address the different issues of the SDLC. Recently big data analytics applications (BDAA) are in demand as more and more data is collected and stakeholders need effective and efficient software to process them. The goal is not just to be able to process big data, but also arrive at useful conclusions that are accurate and timely. Considering the distinctive characteristics of big data and the available infrastructures, tools and development models, we need to create a systematic approach to the SDLC activities for BDAA development. In this paper, we rely on our earlier work identifying the characteristic and requirements of BDAA and use that to propose appropriate models for their development process. It is necessary to carefully examine this domain and adopt the software processes that best serve the developers and is flexible enough to address the different characteristics of such applications.", "title": "" }, { "docid": "eae9f650b00ecc92377b787c1e0da140", "text": "Highly reliable data from a sample of 888 white US children, measured serially in a single study, have been used to provide reference data for head circumference from birth to 18 years of age. The present data differ little from those already available for the age range from birth to 36 months of age, but they are considerably higher (about 0.5 cm) at older ages for boys and tend to be slightly higher for girls. These new reference data are smoother across age than those used currently for screening and evaluation. Percentiles for 6-month increments from birth to 6 years have been provided.", "title": "" }, { "docid": "c9ad1daa4ee0d900c1a2aa9838eb9918", "text": "A central question in human development is how young children gain knowledge so fast. We propose that analogical generalization drives much of this early learning and allows children to generate new abstractions from experience. 
In this paper, we review evidence for analogical generalization in both children and adults. We discuss how analogical processes interact with the child's changing knowledge base to predict the course of learning, from conservative to domain-general understanding. This line of research leads to challenges to existing assumptions about learning. It shows that (a) it is not enough to consider the distribution of examples given to learners; one must consider the processes learners are applying; (b) contrary to the general assumption, maximizing variability is not always the best route for maximizing generalization and transfer.", "title": "" }, { "docid": "86c3aefe7ab3fa2178da219f57bedf81", "text": "We present a model constructed for a large consumer products company to assess their vulnerability to disruption risk and quantify its impact on customer service. Risk profiles for the locations and connections in the supply chain are developed using Monte Carlo simulation, and the flow of material and network interactions are modeled using discrete-event simulation. Capturing both the risk profiles and material flow with simulation allows for a clear view of the impact of disruptions on the system. We also model various strategies for coping with the risk in the system in order to maintain product availability to the customer. We discuss the dynamic nature of risk in the network and the importance of proactive planning to mitigate and recover from disruptions.", "title": "" }, { "docid": "2185097978553d5030252ffa9240fb3c", "text": "The concept of celebrity culture remains remarkably undertheorized in the literature, and it is precisely this gap that this article aims to begin filling in. Starting with media culture definitions, celebrity culture is conceptualized as collections of sense-making practices whose main resources of meaning are celebrity. Consequently, celebrity cultures are necessarily plural. This approach enables us to focus on the spatial differentiation between (sub)national celebrity cultures, for which the Flemish case is taken as a central example. We gain a better understanding of this differentiation by adopting a translocal frame on culture and by focusing on the construction of celebrity cultures through the ‘us and them’ binary and communities. Finally, it is also suggested that what is termed cultural working memory improves our understanding of the remembering and forgetting of actual celebrities, as opposed to more historical figures captured by concepts such as cultural memory.", "title": "" }, { "docid": "ab98f6dc31d080abdb06bb9b4dba798e", "text": "In TEFL, it is often stated that communication presupposes comprehension. The main purpose of readability studies is thus to measure the comprehensibility of a piece of writing. In this regard, different readability measures were initially devised to help educators select passages suitable for both children and adults. However, readability formulas can certainly be extremely helpful in the realm of EFL reading. They were originally designed to assess the suitability of books for students at particular grade levels or ages. Nevertheless, they can be used as basic tools in determining certain crucial EFL text-characteristics instrumental in the skill of reading and its related issues. The aim of the present paper is to familiarize the readers with the most frequently used readability formulas as well as the pros and cons views toward the use of such formulas. 
Of course, this part mostly illustrates studies conducted on readability formulas and the results they obtained. The main objective of this part is to help readers become familiar with the background of the formulas, the theory on which they stand, and what they are and are not good for, with reference to a number of studies cited in this section.", "title": "" } ]
scidocsrr
e4a93515c1075d24a6d77e98137f6538
Results of the WNUT16 Named Entity Recognition Shared Task
[ { "docid": "f51f962753afcf26ed988cca7c85a439", "text": "This paper describes our system used in the 2nd Workshop on Noisy User-generated Text (WNUT) shared task for Named Entity Recognition (NER) in Twitter, in conjunction with Coling 2016. Our system is based on supervised machine learning by applying Conditional Random Fields (CRF) to train two classifiers for two different evaluations. The first evaluation aims at predicting the 10 fine-grained types of named entities, while the second evaluation aims at predicting named entities without assigning types. The experimental results show that our method has significantly improved Twitter NER performance.", "title": "" } ]
[ { "docid": "43efacf740f920fb621cf870cb9102ce", "text": "Vehicular Ad hoc Network (VANETs) help improve efficiency of security applications and road safety. Using the information exchanged between vehicles, the latter can warn drivers about dangerous situations. Detection and warning about such situations require reliable communication between vehicles. In fact, the IEEE 802.11p (WAVE: Wireless Access in the Vehicular Environment) was proposed to support the rapid exchange of data between the vehicles. Several Medium Access Control (MAC) protocols were also introduced for safety application VANET. In this paper, we present the different MAC basic protocols in VANET. We used simulation to compare and analyze their performances.", "title": "" }, { "docid": "16f58cda028e7c542074832be620ec53", "text": "A general circuit configuration for cross-coupled wideband bandstop filters is proposed. The distinct filtering characteristics of this new type of transmission line filter are investigated theoretically and experimentally. It is shown that a ripple stopband can be created, leading to a quasi-elliptic function response that enhances the rejection bandwidth. A demonstrator with approximately 80% fractional bandwidth at a mid-stopband frequency of 4 GHz is developed and presented. The proposed filter is successfully realized in theory and verified by full-wave electromagnetic simulation and the experiment. Theoretical, simulated, and measured results are in excellent agreement.", "title": "" }, { "docid": "42c6ec7e27bc1de6beceb24d52b7216c", "text": "Internet of Things (IoT) refers to the expansion of Internet technologies to include wireless sensor networks (WSNs) and smart objects by extensive interfacing of exclusively identifiable, distributed communication devices. Due to the close connection with the physical world, it is an important requirement for IoT technology to be self-secure in terms of a standard information security model components. Autonomic security should be considered as a critical priority and careful provisions must be taken in the design of dynamic techniques, architectures and self-sufficient frameworks for future IoT. Over the years, many researchers have proposed threat mitigation approaches for IoT and WSNs. This survey considers specific approaches requiring minimal human intervention and discusses them in relation to self-security. This survey addresses and brings together a broad range of ideas linked together by IoT, autonomy and security. More particularly, this paper looks at threat mitigation approaches in IoT using an autonomic taxonomy and finally sets down future directions. & 2014 Published by Elsevier Ltd.", "title": "" }, { "docid": "b5dc44ac0a590f926aa7bbae501db8f3", "text": "Multiple valued logic (MVL) circuits are particularly attractive for nanoscale implementation as advantages in information density and operating speed can be harvested using emerging technologies. In this paper, a new family of MVL gates is proposed for implementation using carbon nanotube field-effect transistors (CNTFETs). The proposed designs use pseudo N-type CNTFETs and no resistor is utilized for their operation. This approach exploits threshold voltage control of the P-type and N-type transistors, while ensuring correct MVL operation for both ternary and quaternary logic gates. This paper provides a detailed assessment of several figures of merit, such as static power consumption, switching power consumption, propagation delay and the power-delay product (PDP). 
Compared with resistor-loaded designs, the proposed pseudo-NCNTFET MVL gates show advantages in circuit area, power consumption and energy efficiency, while still incurring a comparable propagation delay. Compared to a complementary logic family, the pseudo-NCNTFET MVL logic family requires a smaller circuit area with a similar propagation delay on average, albeit with a larger PDP and static power consumption. A design methodology and a discussion of issues related to leakage and yield are also provided for the proposed MVL logic family.", "title": "" }, { "docid": "f00b9a311fb8b14100465c187c9e4659", "text": "We propose a framework for solving combinatorial optimization problems of which the output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.", "title": "" }, { "docid": "7dc5e63ddbb8ec509101299924093c8b", "text": "The task of aspect and opinion terms co-extraction aims to explicitly extract aspect terms describing features of an entity and opinion terms expressing emotions from user-generated texts. To achieve this task, one effective approach is to exploit relations between aspect terms and opinion terms by parsing syntactic structure for each sentence. However, this approach requires expensive effort for parsing and highly depends on the quality of the parsing results. In this paper, we offer a novel deep learning model, named coupled multi-layer attentions. The proposed model provides an end-to-end solution and does not require any parsers or other linguistic resources for preprocessing. Specifically, the proposed model is a multilayer attention network, where each layer consists of a couple of attentions with tensor operators. One attention is for extracting aspect terms, while the other is for extracting opinion terms. They are learned interactively to dually propagate information between aspect terms and opinion terms. Through multiple layers, the model can further exploit indirect relations between terms for more precise information extraction. Experimental results on three benchmark datasets in SemEval Challenge 2014 and 2015 show that our model achieves stateof-the-art performances compared with several baselines.", "title": "" }, { "docid": "c636b8c942728fd7883f74b12eba5ac9", "text": "In this paper we propose a novel approach to detect and reconstruct transparent objects. This approach makes use of the fact that many transparent objects, especially the ones consisting of usual glass, absorb light in certain wavelengths [1]. Given a controlled illumination, this absorption is measurable in the intensity response by comparison to the background. We show the usage of a standard infrared emitter and the intensity sensor of a time of flight (ToF) camera to reconstruct the structure given we have a second view point. The structure can not be measured by the usual 3D measurements of the ToF camera. 
We take advantage of this fact by deriving this internal sensory contradiction from two ToF images and reconstructing an approximated surface of the original transparent object. Therefore, we use a perspective-invariant matching in the intensity channels from the first to the second view of initially acquired candidates. For each matched pixel in the first view, a 3D movement can be predicted given its original 3D measurement and the known distance to the second camera position. If its line of sight did not pass through a transparent object or suffer any other major defect, this prediction will correspond closely to the 3D points actually measured in the second view. Otherwise, if a detectable error occurs, we approximate a more exact point-to-point matching and reconstruct the original shape by triangulating the points in the stereo setup. We tested our approach using a mobile platform with one Swissranger SR4k. As this platform is mobile, we were able to create a stereo setup by moving it. Our results show the detection of transparent objects on tables while simultaneously identifying opaque objects that also existed in the test setup. The viability of our results is demonstrated by a successful automated manipulation of the respective transparent object.", "title": "" }, { "docid": "13eda203c8db0621a1d96d2f19e2fb40", "text": "We present a framework for constructing a specific type of knowledge graph, a concept map from textbooks. Using Wikipedia, we derive prerequisite relations among these concepts. A traditional approach for concept map extraction consists of two sub-problems: key concept extraction and concept relationship identification. Previous work for the most part had considered these two sub-problems independently. We propose a framework that jointly optimizes these sub-problems and investigates methods that identify concept relationships. Experiments on concept maps that are manually extracted in six educational areas (computer networks, macroeconomics, precalculus, databases, physics, and geometry) show that our model outperforms supervised learning baselines that solve the two sub-problems separately. Moreover, we observe that incorporating textbook information helps with concept map extraction.", "title": "" }, { "docid": "8eb3b8fb9420cc27ec17aa884531fa83", "text": "Participation has emerged as an appropriate approach for enhancing natural resources management. However, despite long experimentation with participation, there are still possibilities for improvement in designing a process of stakeholder involvement by addressing stakeholder heterogeneity and the complexity of decision-making processes. This paper provides a state-of-the-art overview of methods. It proposes a comprehensive framework to implement stakeholder participation in environmental projects, from stakeholder identification to evaluation. For each process within this framework, techniques are reviewed and practical tools proposed. The aim of this paper is to establish methods to determine who should participate, when and how. The application of this framework to one river restoration case study in Switzerland will illustrate its strengths and weaknesses.", "title": "" }, { "docid": "cc29e52014c2c5aaf0d26c8a1fc0dcff", "text": "GUNWOONG LEE is a Ph.D. candidate in Information Systems at the W. P. Carey School of Business, Arizona State University. 
His research interests include digital content management in mobile platforms, information and communication technology for development, and technology-driven healthcare innovations. His research has appeared in major conferences and journals including Decision Support Systems, the International Conference on Information Systems, and the Americas Conference on Information Systems. He has consulting experience with Korea Association of Game Industry, Korea Stock Exchange, and other companies and government agencies.", "title": "" }, { "docid": "89b5d821fcb5f9a91612b4936b52ad83", "text": "We investigate the benefits of evaluating Mel-frequency cepstral coefficients (MFCCs) over several time scales in the context of automatic musical instrument identification for signals that are monophonic but derived from real musical settings. We define several sets of features derived from MFCCs computed using multiple time resolutions, and compare their performance against other features that are computed using a single time resolution, such as MFCCs, and derivatives of MFCCs. We find that in each task - pair-wise discrimination, and one vs. all classification - the features involving multiscale decompositions perform significantly better than features computed using a single time-resolution.", "title": "" }, { "docid": "580b5dfe7d17db560d5efd2fd975a284", "text": "Structured knowledge about concepts plays an increasingly important role in areas such as information retrieval. The available ontologies and knowledge graphs that encode such conceptual knowledge, however, are inevitably incomplete. This observation has led to a number of methods that aim to automatically complete existing knowledge bases. Unfortunately, most existing approaches rely on black box models, e.g. formulated as global optimization problems, which makes it difficult to support the underlying reasoning process with intuitive explanations. In this paper, we propose a new method for knowledge base completion, which uses interpretable conceptual space representations and an explicit model for inductive inference that is closer to human forms of commonsense reasoning. Moreover, by separating the task of representation learning from inductive reasoning, our method is easier to apply in a wider variety of contexts. Finally, unlike optimization based approaches, our method can naturally be applied in settings where various logical constraints between the extensions of concepts need to be taken into account.", "title": "" }, { "docid": "d3a8457c4c65652855e734556652c6be", "text": "We consider a supervised learning problem in which data are revealed sequentially and the goal is to determine what will next be revealed. In the context of this problem, algorithms based on association rules have a distinct advantage over classical statistical and machine learning methods; however, there has not previously been a theoretical foundation established for using association rules in supervised learning. We present two simple algorithms that incorporate association rules, and provide generalization guarantees on these algorithms based on algorithmic stability analysis from statistical learning theory. We include a discussion of the strict minimum support threshold often used in association rule mining, and introduce an “adjusted confidence” measure that provides a weaker minimum support condition that has advantages over the strict minimum support. 
The paper brings together ideas from statistical learning theory, association rule mining and Bayesian analysis.", "title": "" }, { "docid": "2f7a15b3d922d9a1d03a6851be5f6622", "text": "The clinical relevance of T cells in the control of a diverse set of human cancers is now beyond doubt. However, the nature of the antigens that allow the immune system to distinguish cancer cells from noncancer cells has long remained obscure. Recent technological innovations have made it possible to dissect the immune response to patient-specific neoantigens that arise as a consequence of tumor-specific mutations, and emerging data suggest that recognition of such neoantigens is a major factor in the activity of clinical immunotherapies. These observations indicate that neoantigen load may form a biomarker in cancer immunotherapy and provide an incentive for the development of novel therapeutic approaches that selectively enhance T cell reactivity against this class of antigens.", "title": "" }, { "docid": "b783e3a8b9aaec7114603bafffcb5bfd", "text": "Acknowledgements This paper has benefited from conversations and collaborations with colleagues, including most notably Stefan Dercon, Cheryl Doss, and Chris Udry. None of them has read this manuscript, however, and they are not responsible for the views expressed here. Steve Wiggins provided critical comments on the first draft of the document and persuaded me to rethink a number of points. The aim of the Natural Resources Group is to build partnerships, capacity and wise decision-making for fair and sustainable use of natural resources. Our priority in pursuing this purpose is on local control and management of natural resources and other ecosystems. The Institute of Development Studies (IDS) is a leading global Institution for international development research, teaching and learning, and impact and communications, based at the University of Sussex. Its vision is a world in which poverty does not exist, social justice prevails and sustainable economic growth is focused on improving human wellbeing. The Overseas Development Institute (ODI) is a leading independent think tank on international development and humanitarian issues. Its mission is to inspire and inform policy and practice which lead to the reduction of poverty, the alleviation of suffering and the achievement of sustainable livelihoods. Smallholder agriculture has long served as the dominant economic activity for people in sub-Saharan Africa, and it will remain enormously important for the foreseeable future. But the size of the sector does not necessarily imply that investments in the smallholder sector will yield high social benefits in comparison to other possible uses of development resources. Large changes could potentially affect the viability of smallholder systems, emanating from shifts in technology, markets, climate and the global environment. The priorities for development policy will vary across and within countries due to the highly heterogeneous nature of the smallholder sector.", "title": "" }, { "docid": "69d47c319821a768788282d84fa1f0f1", "text": "The high price of incoming international calls is a common method of subsidizing telephony infrastructure in the developing world. Accordingly, international telephone system interconnects are regulated to ensure call quality and accurate billing. High call tariffs create a strong incentive to evade such interconnects and deliver costly international calls illicitly. 
Specifically, adversaries use VoIP-GSM gateways informally known as “simboxes” to receive incoming calls over wired data connections and deliver them into a cellular voice network through a local call that appears to originate from a customer's phone. This practice is not only extremely profitable for simboxers, but also dramatically degrades network experience for legitimate customers, violates telecommunications laws in many countries, and results in significant revenue loss. In this paper, we present a passive detection technique for combating simboxes at a cellular base station. Our system relies on the raw voice data received by the tower during a call to distinguish errors in GSM transmission from the distinct audio artifacts caused by delivering the call over a VoIP link. Our experiments demonstrate that this approach is highly effective, and can detect 87% of real simbox calls in only 30 seconds of audio with no false positives. Moreover, we demonstrate that evading our detection across multiple calls is only possible with a small probability. In so doing, we demonstrate that fraud that degrades network quality and costs telecommunications billions of dollars annually can easily be detected and counteracted in real time.", "title": "" }, { "docid": "ac59e4ad40892da3d11d18eb40c09da8", "text": "Recent advances in consumer-grade depth sensors have enabled the collection of massive amounts of real-world 3D objects. Together with the rise of deep learning, it brings great potential for large-scale 3D object retrieval. In this challenge, we aim to study and evaluate the performance of 3D object retrieval algorithms with RGB-D data. To support the study, we expanded the previous ObjectNN dataset [HTT∗17] to include RGB-D objects from both SceneNN [HPN∗16] and ScanNet [DCS∗17], with the CAD models from ShapeNetSem [CFG∗15]. Evaluation results show that while the RGB-D to CAD retrieval problem is indeed challenging due to incomplete RGB-D reconstructions, it can be addressed to a certain extent using deep learning techniques trained on multi-view 2D images or 3D point clouds. The best method in this track has an 82% retrieval accuracy.", "title": "" }, { "docid": "c14c37eb74a994c0799d39ab53abf311", "text": "All learners learn best when they are motivated; so do adults. Hence, the way to ensure success of students in higher education is first to know what motivates and sustains them in the learning process. Based on a study of 203 university students, this paper presents the eight most motivating factors for adult learners in higher education. These include quality of instruction; quality of curriculum; relevance and pragmatism; interactive classrooms and effective management practices; progressive assessment and timely feedback; self-directedness; conducive learning environment; and effective academic advising practices. The study concludes that these eight factors are critical to eliciting or enhancing the will power in students in higher education toward successful learning. The implications for practice and further research are also discussed.", "title": "" }, { "docid": "fba1a1296d8f3e22248e45cbe33263b5", "text": "Wi-Fi has become the de facto wireless technology for achieving short- to medium-range device connectivity. While early attempts to secure this technology have proven inadequate in several respects, the current, more robust security amendments will inevitably be outperformed in the future, too. 
In any case, several security vulnerabilities have been spotted in virtually any version of the protocol rendering the integration of external protection mechanisms a necessity. In this context, the contribution of this paper is multifold. First, it gathers, categorizes, thoroughly evaluates the most popular attacks on 802.11 and analyzes their signatures. Second, it offers a publicly available dataset containing a rich blend of normal and attack traffic against 802.11 networks. A quite extensive first-hand evaluation of this dataset using several machine learning algorithms and data features is also provided. Given that to the best of our knowledge the literature lacks such a rich and well-tailored dataset, it is anticipated that the results of the work at hand will offer a solid basis for intrusion detection in the current as well as next-generation wireless networks.", "title": "" }, { "docid": "142c5598f0a8b95b5d4f3e5656a857a9", "text": "Flavanols from chocolate appear to increase nitric oxide bioavailability, protect vascular endothelium, and decrease cardiovascular disease (CVD) risk factors. We sought to test the effect of flavanol-rich dark chocolate (FRDC) on endothelial function, insulin sensitivity, beta-cell function, and blood pressure (BP) in hypertensive patients with impaired glucose tolerance (IGT). After a run-in phase, 19 hypertensives with IGT (11 males, 8 females; 44.8 +/- 8.0 y) were randomized to receive isocalorically either FRDC or flavanol-free white chocolate (FFWC) at 100 g/d for 15 d. After a wash-out period, patients were switched to the other treatment. Clinical and 24-h ambulatory BP was determined by sphygmometry and oscillometry, respectively, flow-mediated dilation (FMD), oral glucose tolerance test, serum cholesterol and C-reactive protein, and plasma homocysteine were evaluated after each treatment phase. FRDC but not FFWC ingestion decreased insulin resistance (homeostasis model assessment of insulin resistance; P < 0.0001) and increased insulin sensitivity (quantitative insulin sensitivity check index, insulin sensitivity index (ISI), ISI(0); P < 0.05) and beta-cell function (corrected insulin response CIR(120); P = 0.035). Systolic (S) and diastolic (D) BP decreased (P < 0.0001) after FRDC (SBP, -3.82 +/- 2.40 mm Hg; DBP, -3.92 +/- 1.98 mm Hg; 24-h SBP, -4.52 +/- 3.94 mm Hg; 24-h DBP, -4.17 +/- 3.29 mm Hg) but not after FFWC. Further, FRDC increased FMD (P < 0.0001) and decreased total cholesterol (-6.5%; P < 0.0001), and LDL cholesterol (-7.5%; P < 0.0001). Changes in insulin sensitivity (Delta ISI - Delta FMD: r = 0.510, P = 0.001; Delta QUICKI - Delta FMD: r = 0.502, P = 0.001) and beta-cell function (Delta CIR(120) - Delta FMD: r = 0.400, P = 0.012) were directly correlated with increases in FMD and inversely correlated with decreases in BP (Delta ISI - Delta 24-h SBP: r = -0.368, P = 0.022; Delta ISI - Delta 24-h DBP r = -0.384, P = 0.017). Thus, FRDC ameliorated insulin sensitivity and beta-cell function, decreased BP, and increased FMD in IGT hypertensive patients. These findings suggest flavanol-rich, low-energy cocoa food products may have a positive impact on CVD risk factors.", "title": "" } ]
scidocsrr
f92af42a16e4b181d528f7067b0752f2
PCA vs. ICA: A Comparison on the FERET Data Set
[ { "docid": "8b948819efed14853dcfeeabdb28c1be", "text": "We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in \"blind\" signal processing.", "title": "" }, { "docid": "ffc36fa0dcc81a7f5ba9751eee9094d7", "text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of ICA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Summary: The independent component analysis (ICA) of a vector is based on the search for a linear transformation that minimizes the statistical dependence between its components. To define suitable search criteria, the expansion of mutual information is used as a function of cumulants of increasing order. An efficient algorithm is proposed that allows the ICA of data matrices to be computed within polynomial time. The concept of ICA can in fact be regarded as an extension of principal component analysis (PCA), which can only enforce independence up to the second order and therefore defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, source localization, and blind identification and deconvolution.", "title": "" } ]
[ { "docid": "c252cca4122984aac411a01ce28777f7", "text": "An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight with the camera mounted onboard the vehicle. The target considered consists of a finite set of stationary and disjoint points lying in a plane. Control of the position and orientation dynamics is decoupled using a visual error based on spherical centroid data, along with estimations of the linear velocity and the gravitational inertial direction extracted from image features and an embedded inertial measurement unit. The visual error used compensates for poor conditioning of the image Jacobian matrix by introducing a nonhomogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller, that ensures exponential convergence of the system considered, is derived for the full dynamics of the system using control Lyapunov function design techniques. Experimental results on a quadrotor UAV, developed by the French Atomic Energy Commission, demonstrate the robustness and performance of the proposed control strategy.", "title": "" }, { "docid": "6d570aabfbf4f692fc36a0ef5151a469", "text": "Background: Balance is a component of basic needs for daily activities and it plays an important role in static and dynamic activities. Core stabilization training is thought to improve balance, postural control, and reduce the risk of lower extremity injuries. The purpose of this study was to study the effect of core stabilizing program on balance in spastic diplegic cerebral palsy children. Subjects and Methods: Thirty diplegic cerebral palsy children from both sexes ranged in age from six to eight years participated in this study. They were assigned randomly into two groups of equal numbers, control group (A) children were received selective therapeutic exercises and study group (B) children were received selective therapeutic exercises plus core stabilizing program for eight weeks. Each patient of the two groups was evaluated before and after treatment by Biodex Balance System in laboratory of balance in faculty of physical therapy (antero posterior, medio lateral and overall stability). Patients in both groups received traditional physical therapy program for one hour per day and three sessions per week and group (B) were received core stabilizing program for eight weeks three times per week. Results: There was no significant difference between the two groups in all measured variables before wearing the orthosis (p>0.05), while there was significant difference when comparing pre and post mean values of all measured variables in each group (p<0.01). When comparing post mean values between both groups, the results revealed significant improvement in favor of group (B) (p<0.01). Conclusion: core stabilizing program is an effective therapeutic exercise to improve balance in diplegic cerebral palsy children.", "title": "" }, { "docid": "ffdd14d8d74a996971284a8e5e950996", "text": "Ten years on from a review in the twentieth issue of this journal, this contribution assess the direction research in the field of glucose sensing for diabetes is headed and various technologies to be seen in the future. The emphasis of this review was placed on the home blood glucose testing market. After an introduction to diabetes and glucose sensing, this review analyses state of the art and pipeline devices; in particular their user friendliness and technological advancement. 
This review complements conventional reviews based on scholarly published papers in journals.", "title": "" }, { "docid": "51e0caf419babd61615e1537545e40e8", "text": "Past research on automatic facial expression analysis has focused mostly on the recognition of prototypic expressions of discrete emotions rather than on the analysis of dynamic changes over time, although the importance of temporal dynamics of facial expressions for interpretation of the observed facial behavior has been acknowledged for over 20 years. For instance, it has been shown that the temporal dynamics of spontaneous and volitional smiles are fundamentally different from each other. In this work, we argue that the same holds for the temporal dynamics of brow actions and show that velocity, duration, and order of occurrence of brow actions are highly relevant parameters for distinguishing posed from spontaneous brow actions. The proposed system for discrimination between volitional and spontaneous brow actions is based on automatic detection of Action Units (AUs) and their temporal segments (onset, apex, offset) produced by movements of the eyebrows. For each temporal segment of an activated AU, we compute a number of mid-level feature parameters including the maximal intensity, duration, and order of occurrence. We use Gentle Boost to select the most important of these parameters. The selected parameters are used further to train Relevance Vector Machines to determine per temporal segment of an activated AU whether the action was displayed spontaneously or volitionally. Finally, a probabilistic decision function determines the class (spontaneous or posed) for the entire brow action. When tested on 189 samples taken from three different sets of spontaneous and volitional facial data, we attain a 90.7% correct recognition rate.", "title": "" }, { "docid": "a48193a735485fa2bca35897bae54208", "text": "Interest in and research on disgust has surged over the past few decades. The field, however, still lacks a coherent theoretical framework for understanding the evolved function or functions of disgust. Here we present such a framework, emphasizing 2 levels of analysis: that of evolved function and that of information processing. Although there is widespread agreement that disgust evolved to motivate the avoidance of contact with disease-causing organisms, there is no consensus about the functions disgust serves when evoked by acts unrelated to pathogen avoidance. Here we suggest that in addition to motivating pathogen avoidance, disgust evolved to regulate decisions in the domains of mate choice and morality. For each proposed evolved function, we posit distinct information processing systems that integrate function-relevant information and account for the trade-offs required of each disgust system. By refocusing the discussion of disgust on computational mechanisms, we recast prior theorizing on disgust into a framework that can generate new lines of empirical and theoretical inquiry.", "title": "" }, { "docid": "050ca96de473a83108b5ac26f4ac4349", "text": "The concept of graphene-based two-dimensional leaky-wave antenna (LWA), allowing both frequency tuning and beam steering in the terahertz band, is proposed in this paper. In its design, a graphene sheet is used as a tuning part of the high-impedance surface (HIS) that acts as the ground plane of such 2-D LWA. 
It is shown that, by adjusting the graphene conductivity, the reflection phase of the HIS can be altered effectively, thus controlling the resonant frequency of the 2-D LWA over a broad band. In addition, a flexible adjustment of its pointing direction can be achieved over a wide range, while keeping the operating frequency fixed. Transmission-line methods are used to accurately predict the antenna reconfigurable characteristics, which are further verified by means of commercial full-wave analysis tools.", "title": "" }, { "docid": "909405e3c06f22273107cb70a40d88c6", "text": "This paper reports a 6-bit 220-MS/s time-interleaving successive approximation register analog-to-digital converter (SAR ADC) for low-power low-cost CMOS integrated systems. The major concept of the design is based on the proposed set-and-down capacitor switching method in the DAC capacitor array. Compared to the conventional switching method, the average switching energy is reduced about 81%. At 220-MS/s sampling rate, the measured SNDR and SFDR are 32.62 dB and 48.96 dB respectively. The resultant ENOB is 5.13 bits. The total power consumption is 6.8 mW. Fabricated in TSMC 0.18-µm 1P5M Digital CMOS technology, the ADC only occupies 0.032 mm2 active area.", "title": "" }, { "docid": "cb88333d7c90df778361318dd362e9cb", "text": "1. All other texts on the mathematics of language are now obsolete. Therefore, instead of going on about what a wonderful job Partee, ter Meulen, and Wall (henceforth, PMW) have done in some ways (breadth of coverage, much better presentation of formal semantics than is usual in books on mathematics of language, etc.), I will leave the lily ungilded, and focus on some points where the book under review could be made far better than it actually is. 2. Perhaps my main complaint concerns the treatment of the connections between the mathematical methods and the linguistics. This whole question is dealt with rather unevenly, and this is reflected in the very structure of the book. The major topics covered, corresponding to the book's division into parts (which are then subdivided into chapters) are set theory, logic and formal systems, algebra, \"English as a formal language\" (this is the heading under which compositionality, lambda-abstraction, generalized quantifiers, and intensionality are discussed), and finally formal language and automata theory. Now, the \"English as a formal language\" part deals with a Montague-style treatment of this language, but it does not go into contemporary syntactic analyses of English, not even ones that are mathematically precise and firmly grounded in formal language theory. Having praised the book for its detailed discussion of the uses of formal semantics in linguistics, I must damn its cavalier treatment of the uses of formal syntax. Thus, there is no mention anywhere in it of generalized phrase structure grammar or X-bar syntax or almost anything else of relevance to modern syntactic theory. Likewise, although the section on set theory deals at some length with nondenumerable sets, there is no mention of the argument of Langendoen and Postal (1984) that NLs are not denumerable. Since this is perhaps the one place in the literature where set theory and linguistics meet, one does not have to be a fan of Langendoen and Postal to see that this topic should be broached. 3. 
Certain important theoretical topics, usually ones at the interface of mathematics and linguistics, are presented sketchily and even misleadingly; for example, the compositionality of formal semantics, the generative power of transformational grammar, the nonregularity and noncontext freeness of NLs, and (more generally) the question of what kinds of objects one can prove things about. Let us begin with the principle of compositionality (i.e., that \"the meaning of a complex expression is a function of the meanings of its parts and of the syntactic rules by which they are combined\"). PMW claim that \"construed broadly and vaguely", "title": "" }, { "docid": "2fa2ada108af6a24ae296723cec5ae14", "text": "We sought to determine if antenatal corticosteroid treatment administered prior to 24 weeks' gestation influences neonatal morbidity and mortality in extremely low-birth-weight infants. A retrospective review was performed of all singleton pregnancies treated with one complete course of antenatal corticosteroids prior to 24 weeks' gestation and delivered between 23(0)/(7) and 25(6)/(7) weeks. These infants were compared with similar gestational-age controls. There were no differences in gender, race, birth weight, and gestational age between the groups. Infants exposed to antenatal corticosteroids had lower mortality (29.3% versus 62.9%, P = 0.001) and grade 3 or 4 intraventricular hemorrhage (IVH; 16.7% versus 36%, P < 0.05; relative risk [RR]: 2.16). Grade 3 and 4 IVH was associated with significantly lower survival probability as compared with no IVH or grade 1 and 2 IVH (P < 0.001, RR: 10.6, 95% confidence interval [CI]: 4.4 to 25.6). Antenatal steroid exposure was associated with a 62% decrease in the hazard rate compare with those who did not receive antenatal steroids after adjusting for IVH grade (Cox proportional hazard model, hazard ratio 0.38, 95% CI: 0.152 to 0.957, P = 0.04). The rates of premature rupture of membranes and chorioamnionitis were higher for infants exposed to antenatal corticosteroids. Exposure to a single course of antenatal corticosteroids prior to 24 weeks' gestation was associated with reduction of the risk of severe IVH and neonatal mortality for extremely low-birth-weight infants.", "title": "" }, { "docid": "905ba98c5d0a3ec39e06e9a14caa9016", "text": "Dialogue topic tracking is a sequential labelling problem of recognizing the topic state at each time step in given dialogue sequences. This paper presents various artificial neural network models for dialogue topic tracking, including convolutional neural networks to account for semantics at each individual utterance, and recurrent neural networks to account for conversational contexts along multiple turns in the dialogue history. The experimental results demonstrate that our proposed models can significantly improve the tracking performances in human-human conversations.", "title": "" }, { "docid": "50dd728b4157aefb7df35366f5822d0d", "text": "This paper describes iDriver, an iPhone software to remote control “Spirit of Berlin”. “Spirit of Berlin” is a completely autonomous car developed by the Free University of Berlin which is capable of unmanned driving in urban areas. iDriver is an iPhone application sending control packets to the car in order to remote control its steering wheel, gas and brake pedal, gear shift and turn signals. 
Additionally, a video stream from two top-mounted cameras is broadcasted back to the iPhone.", "title": "" }, { "docid": "1453350c8134ecfe272255b71e7707ad", "text": "Program slicing is a viable method to restrict the focus of a task to specific sub-components of a program. Examples of applications include debugging, testing, program comprehension, restructuring, downsizing, and parallelization. This paper discusses different statement deletion based slicing methods, together with algorithms and applications to software engineering.", "title": "" }, { "docid": "9fb9664eea84d3bc0f59f7c4714debc1", "text": "International research has shown that users are complacent when it comes to smartphone security behaviour. This is contradictory, as users perceive data stored on the `smart' devices to be private and worth protecting. Traditionally less attention is paid to human factors compared to technical security controls (such as firewalls and antivirus), but there is a crucial need to analyse human aspects as technology alone cannot deliver complete security solutions. Increasing a user's knowledge can improve compliance with good security practices, but for trainers and educators to create meaningful security awareness materials they must have a thorough understanding of users' existing behaviours, misconceptions and general attitude towards smartphone security.", "title": "" }, { "docid": "a5255efa61de43a3341473facb4be170", "text": "Differentiation of 3T3-L1 preadipocytes can be induced by a 2-d treatment with a factor \"cocktail\" (DIM) containing the synthetic glucocorticoid dexamethasone (dex), insulin, the phosphodiesterase inhibitor methylisobutylxanthine (IBMX) and fetal bovine serum (FBS). We temporally uncoupled the activities of the four DIM components and found that treatment with dex for 48 h followed by IBMX treatment for 48 h was sufficient for adipogenesis, whereas treatment with IBMX followed by dex failed to induce significant differentiation. Similar results were obtained with C3H10T1/2 and primary mesenchymal stem cells. The 3T3-L1 adipocytes differentiated by sequential treatment with dex and IBMX displayed insulin sensitivity equivalent to DIM adipocytes, but had lower sensitivity to ISO-stimulated lipolysis and reduced triglyceride content. The nondifferentiating IBMX-then-dex treatment produced transient expression of adipogenic transcriptional regulatory factors C/EBPbeta and C/EBPdelta, and little induction of terminal differentiation factors C/EBPalpha and PPARgamma. Moreover, the adipogenesis inhibitor preadipocyte factor-1 (Pref-1) was repressed by DIM or by dex-then-IBMX, but not by IBMX-then-dex treatment. We conclude that glucocorticoids drive preadipocytes to a novel intermediate cellular state, the dex-primed preadipocyte, during adipogenesis in cell culture, and that Pref-1 repression may be a cell fate determinant in preadipocytes.", "title": "" }, { "docid": "67925645b590cba622dd101ed52cf9e2", "text": "This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from \"thin slices\" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used these excerpts to complete assessments of overall psychopathy and its Factor 1 and Factor 2 components, various personality disorders, violence proneness, and attractiveness. 
Thin-slice ratings of psychopathy correlated moderately and significantly with psychopathy criterion measures, especially those related to interpersonal features of psychopathy, particularly in the 5- and 10-s excerpt conditions and in the video and combined channel conditions. These findings demonstrate that first impressions of psychopathy and related constructs, particularly those pertaining to interpersonal functioning, can be reasonably reliable and valid. They also raise intriguing questions regarding how individuals form first impressions and about the extent to which first impressions may influence the assessment of personality disorders. (PsycINFO Database Record (c) 2009 APA, all rights reserved).", "title": "" }, { "docid": "39bc8559589f388bb6eca16a1b3b2e87", "text": "This paper presents a method to learn a decision tree to quantitatively explain the logic of each prediction of a pretrained convolutional neural network (CNN). Our method boosts the following two aspects of network interpretability. 1) In the CNN, each filter in a high conv-layer must represent a specific object part, instead of describing mixed patterns without clear meanings. 2) People can explain each specific prediction made by the CNN at the semantic level using a decision tree, i.e. which filters (or object parts) are used for prediction and how much they contribute to the prediction. To conduct such a quantitative explanation of a CNN, our method learns explicit representations of object parts in high conv-layers of the CNN and mines potential decision modes memorized in fully-connected layers. The decision tree organizes these potential decision modes in a coarse-to-fine manner. Experiments have demonstrated the effectiveness of the proposed method.", "title": "" }, { "docid": "00ac09dab67200f6b9df78a480d6dbd8", "text": "In this paper, a new three-phase current-fed push-pull DC-DC converter is proposed. This converter uses a high-frequency three-phase transformer that provides galvanic isolation between the power source and the load. The three active switches are connected to the same reference, which simplifies the gate drive circuitry. Reduction of the input current ripple and the output voltage ripple is achieved by means of an inductor and a capacitor, whose volumes are smaller than in equivalent single-phase topologies. The three-phase DC-DC conversion also helps in loss distribution, allowing the use of lower cost switches. These characteristics make this converter suitable for applications where low-voltage power sources are used and the associated currents are high, such as in fuel cells, photovoltaic arrays, and batteries. The theoretical analysis, a simplified design example, and the experimental results for a 1-kW prototype will be presented for two operation regions. The prototype was designed for a switching frequency of 40 kHz, an input voltage of 120 V, and an output voltage of 400 V.", "title": "" }, { "docid": "52e75a2e3d34c1cef5e61c69e074caf2", "text": "In this paper, we propose an efficient method for license plate localization in images with varied conditions and complex backgrounds. First, in order to reduce problems such as low quality and low contrast in the vehicle images, image contrast is enhanced by two different methods and the better result is selected for the following steps. In the second part, vertical edges of the enhanced image are extracted by a Sobel mask. Then most of the noise and background edges are removed by an effective algorithm. 
The output of this stage is passed to a morphological filter to extract the candidate regions, and finally we use several geometrical features such as the area of the regions, aspect ratio and edge density to eliminate the non-plate regions and segment the plate from the input car image. This method was tested on real images captured under different imaging conditions. The experimental results show that our proposed method is nearly independent of environmental conditions such as lighting, camera angle, camera distance from the automobile, and license plate rotation.", "title": "" }, { "docid": "7bf3adb52e9f2c40d419872f82429a06", "text": "OBJECTIVES\nWe examine recently published research on the extraction of information from textual documents in the Electronic Health Record (EHR).\n\n\nMETHODS\nLiterature review of the research published after 1995, based on PubMed, conference proceedings, and the ACM Digital Library, as well as on relevant publications referenced in papers already included.\n\n\nRESULTS\n174 publications were selected and are discussed in this review in terms of methods used, pre-processing of textual documents, contextual features detection and analysis, extraction of information in general, extraction of codes and of information for decision-support and enrichment of the EHR, information extraction for surveillance, research, automated terminology management, and data mining, and de-identification of clinical text.\n\n\nCONCLUSIONS\nPerformance of information extraction systems with clinical text has improved since the last systematic review in 1995, but they are still rarely applied outside of the laboratory they have been developed in. Competitive challenges for information extraction from clinical text, along with the availability of annotated clinical text corpora, and further improvements in system performance are important factors to stimulate advances in this field and to increase the acceptance and usage of these systems in concrete clinical and biomedical research contexts.", "title": "" } ]
scidocsrr
c6b856db07d45a093186b5c5a651d2b1
BUILDING INFORMATION MODELLING FOR CULTURAL HERITAGE : A REVIEW
[ { "docid": "47cf10951d13e1da241a5551217aa2d5", "text": "Despite the widespread adoption of building information modelling (BIM) for the design and lifecycle management of new buildings, very little research has been undertaken to explore the value of BIM in the management of heritage buildings and cultural landscapes. To that end, we are investigating the construction of BIMs that incorporate both quantitative assets (intelligent objects, performance data) and qualitative assets (historic photographs, oral histories, music). Further, our models leverage the capabilities of BIM software to provide a navigable timeline that chronicles tangible and intangible changes in the past and projections into the future. In this paper, we discuss three projects undertaken by the authors that explore an expanded role for BIM in the documentation and conservation of architectural heritage. The projects range in scale and complexity and include: a cluster of three, 19th century heritage buildings in the urban core of Toronto, Canada; a 600 hectare village in rural, south-eastern Ontario with significant modern heritage value, and a proposed web-centered BIM database for materials and methods of construction specific to heritage conservation.", "title": "" } ]
[ { "docid": "a922051835f239db76be1dbb8edead3e", "text": "Among the simplest and most intuitively appealing classes of nonprobabilistic classification procedures are those that weight the evidence of nearby sample observations most heavily. More specifically, one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the evidence of another neighbor which is at a greater distance from the unclassified observation. One such classification rule is described which makes use of a neighbor weighting function for the purpose of assigning a class to an unclassified sample. The admissibility of such a rule is also considered.", "title": "" }, { "docid": "959a43b6b851a4a255466296efac7299", "text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.", "title": "" }, { "docid": "3df12301c628a4b1fc9421c80b79b42b", "text": "Cellular processes can only be understood as the dynamic interplay of molecules. There is a need for techniques to monitor interactions of endogenous proteins directly in individual cells and tissues to reveal the cellular and molecular architecture and its responses to perturbations. Here we report our adaptation of the recently developed proximity ligation method to examine the subcellular localization of protein-protein interactions at single-molecule resolution. Proximity probes—oligonucleotides attached to antibodies against the two target proteins—guided the formation of circular DNA strands when bound in close proximity. The DNA circles in turn served as templates for localized rolling-circle amplification (RCA), allowing individual interacting pairs of protein molecules to be visualized and counted in human cell lines and clinical specimens. We used this method to show specific regulation of protein-protein interactions between endogenous Myc and Max oncogenic transcription factors in response to interferon-γ (IFN-γ) signaling and low-molecular-weight inhibitors.", "title": "" }, { "docid": "993d7ee2498f7b19ae70850026c0a0c4", "text": "We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. 
Our model is simple and easily extendable, yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish.", "title": "" }, { "docid": "b15078182915859c3eab4b174115cd0f", "text": "We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language.", "title": "" }, { "docid": "0277fd19009088f84ce9f94a7e942bc1", "text": "This study can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning systems. This paper proposes a new framework for assessing the readiness of an organization to implement an e-learning system project on the basis of the McKinsey 7S model using fuzzy logic analysis. The study considers seven dimensions as an approach to assessing the current situation of the organization prior to system implementation, in order to identify weak areas that may cause the project to fail. Data collection focused on questionnaires and group interviews at three colleges of Mosul University in Iraq. Success in building an e-learning system at the University of Mosul can be achieved through a readiness assessment based on this multidimensional 7S framework with 23 selected factors; in this way failures or weaknesses facing the implementation process can be avoided before the start of the project, a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.", "title": "" }, { "docid": "458e4b5196805b608e15ee9c566123c9", "text": "For the first half century of animal virology, the major problem was lack of a simple method for quantitating infectious virus particles; the only method available at that time was some form or other of the serial-dilution end-point method in animals, all of which were both slow and expensive. Cloned cultured animal cells, which began to be available around 1950, provided Dulbecco with a new approach. He adapted the technique developed by Emory Ellis and Max Delbrück for assaying bacteriophage, that is, seeding serial dilutions of a given virus population onto a confluent lawn of host cells, to the measurement of Western equine encephalitis virus, and demonstrated that it also formed easily countable plaques in monolayers of chick embryo fibroblasts. 
The impact of this finding was enormous; animal virologists had been waiting for such a technique for decades. It was immediately found to be widely applicable to many types of cells and most viruses, gained quick acceptance, and is widely regarded as marking the beginning of molecular animal virology. Renato Dulbecco was awarded the Nobel Prize in 1975. W. K. JOKLIK", "title": "" }, { "docid": "e011ab57139a9a2f6dc13033b0ab6223", "text": "Over the last few years, virtual reality (VR) has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate immersive experiences at low cost. One of the remaining challenges to further improve visual comfort in VR experiences is the vergence-accommodation conflict inherent to all stereoscopic displays. Accurate reproduction of all depth cues is crucial for visual comfort. By combining well-known stereoscopic display principles with emerging factored light field technology, we present the first wearable VR display supporting high image resolution as well as focus cues. A light field is presented to each eye, which provides more natural viewing experiences than conventional near-eye displays. Since the eye box is just slightly larger than the pupil size, rank-1 light field factorizations are sufficient to produce correct or nearly-correct focus cues; no time-multiplexed image display or gaze tracking is required. We analyze lens distortions in 4D light field space and correct them using the afforded high-dimensional image formation. We also demonstrate significant improvements in resolution and retinal blur quality over related near-eye displays. Finally, we analyze diffraction limits of these types of displays.", "title": "" }, { "docid": "3aab2226cfdee4c6446090922fdd4f2d", "text": "Information system and data mining are important resources for the investors to make decisions. Information theory pointed that the information is increasing all the time, when the corporations build their millions of databases in order to improve the efficiency. Database technology caters to the needs of fully developing the information resources. This essay discusses the problem of decision making support system and the application of business data mining in commercial decision making. It is recommended that the intelligent decision support system should be built. Besides, the business information used in the commercial decision making must follow the framework of a whole system under guideline, which should be designed by the company.", "title": "" }, { "docid": "cd08ec6c25394b3304368952cf4fb99b", "text": "Recently, several experimental studies have been conducted on block data layout as a data transformation technique used in conjunction with tiling to improve cache performance. In this paper, we provide a theoretical analysis for the TLB and cache performance of block data layout. For standard matrix access patterns, we derive an asymptotic lower bound on the number of TLB misses for any data layout and show that block data layout achieves this bound. We show that block data layout improves TLB misses by a factor of O B compared with conventional data layouts, where B is the block size of block data layout. This reduction contributes to the improvement in memory hierarchy performance. Using our TLB and cache analysis, we also discuss the impact of block size on the overall memory hierarchy performance. 
These results are validated through simulations and experiments on state-of-the-art platforms.", "title": "" }, { "docid": "1caaac35c25cd9efb729b57e59c41be5", "text": "The design of elastic file synchronization services like Dropbox is an open and complex issue yet not unveiled by the major commercial providers, as it includes challenges like fine-grained programmable elasticity and efficient change notification to millions of devices. In this paper, we propose a novel architecture for file synchronization which aims to solve the above two major challenges. At the heart of our proposal lies ObjectMQ, a lightweight framework for providing programmatic elasticity to distributed objects using messaging. The efficient use of indirect communication: i) enables programmatic elasticity based on queue message processing, ii) simplifies change notifications offering simple unicast and multicast primitives; and iii) provides transparent load balancing based on queues.\n Our reference implementation is StackSync, an open source elastic file synchronization Cloud service developed in the context of the FP7 project CloudSpaces. StackSync supports both predictive and reactive provisioning policies on top of ObjectMQ that adapt to real traces from the Ubuntu One service. The feasibility of our approach has been extensively validated with an open benchmark, including commercial synchronization services like Dropbox or OneDrive.", "title": "" }, { "docid": "0bc1c637d6f4334dd8a27491ebde40d6", "text": "Osteoarthritis of the hip describes a clinical syndrome of joint pain accompanied by varying degrees of functional limitation and reduced quality of life. Osteoarthritis may not be progressive and most patients will not need surgery, with their symptoms adequately controlled by non-surgical measures. The treatment of hip osteoarthritis is aimed at reducing pain and stiffness and improving joint mobility. Total hip replacement remains the most effective treatment option but it is a major surgery with potential serious complications. NICE guideline has suggested a holistic approach to management of hip osteoarthritis which includes both nonpharmacological and pharmacological treatments. The non-pharmacological treatments range from education ,physical therapy and behavioral changes ,walking aids .The ESCAPE( Enabling Self-Management and Coping of Arthritic Pain Through Exercise) rehabilitation programme for hip and knee osteoarthritis which integrates simple education, self-management and coping strategies, with an exercise regimen has shown to be more cost-effective than usual care. There is a choice of reviewed pharmacological treatments available, but there are few current reviews of possible nonpharmacological methods. This review will focus on the non-pharmacological and non-surgical methods.", "title": "" }, { "docid": "51f5ba274068c0c03e5126bda056ba98", "text": "Electricity is conceivably the most multipurpose energy carrier in modern global economy, and therefore primarily linked to human and economic development. Energy sector reform is critical to sustainable energy development and includes reviewing and reforming subsidies, establishing credible regulatory frameworks, developing policy environments through regulatory interventions, and creating marketbased approaches. 
Energy security has recently become an important policy driver and privatization of the electricity sector has secured energy supply and provided cheaper energy services in some countries in the short term, but has led to contrary effects elsewhere due to increasing competition, resulting in deferred investments in plant and infrastructure due to longer-term uncertainties. On the other hand global dependence on fossil fuels has led to the release of over 1100 GtCO2 into the atmosphere since the mid-19th century. Currently, energy-related GHG emissions, mainly from fossil fuel combustion for heat supply, electricity generation and transport, account for around 70% of total emissions including carbon dioxide, methane and some traces of nitrous oxide. This multitude of aspects play a role in societal debate in comparing electricity generating and supply options, such as cost, GHG emissions, radiological and toxicological exposure, occupational health and safety, employment, domestic energy security, and social impressions. Energy systems engineering provides a methodological scientific framework to arrive at realistic integrated solutions to complex energy problems, by adopting a holistic, systems-based approach, especially at decision making and planning stage. Modeling and optimization found widespread applications in the study of physical and chemical systems, production planning and scheduling systems, location and transportation problems, resource allocation in financial systems, and engineering design. This article reviews the literature on power and supply sector developments and analyzes the role of modeling and optimization in this sector as well as the future prospective of optimization modeling as a tool for sustainable energy systems. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "486978346e7a77f66e3ccce6f07fb346", "text": "In this paper, we present a novel structure, Semi-AutoEncoder, based on AutoEncoder. We generalize it into a hybrid collaborative filtering model for rating prediction as well as personalized top-n recommendations. Experimental results on two real-world datasets demonstrate its state-of-the-art performances.", "title": "" }, { "docid": "16a6c26d6e185be8383c062c6aa620f8", "text": "In this research, we suggested a vision-based traffic accident detection system for automatically detecting, recording, and reporting traffic accidents at intersections. This model first extracts the vehicles from the video image of CCD camera, tracks the moving vehicles, and extracts features such as the variation rate of the velocity, position, area, and direction of moving vehicles. The model then makes decisions on the traffic accident based on the extracted features. And we suggested and designed the metadata registry for the system to improve the interoperability. In the field test, 4 traffic accidents were detected and recorded by the system. The video clips are invaluable for intersection safety analysis.", "title": "" }, { "docid": "1fc468d42d432f716b3518dbba268db5", "text": "In this paper a fast sweeping method for computing the numerical solution of Eikonal equations on a rectangular grid is presented. The method is an iterative method which uses upwind difference for discretization and uses Gauss-Seidel iterations with alternating sweeping ordering to solve the discretized system. The crucial idea is that each sweeping ordering follows a family of characteristics of the corresponding Eikonal equation in a certain direction simultaneously. 
The method has an optimal complexity of O(N) for N grid points and is extremely simple to implement in any number of dimensions. Monotonicity and stability properties of the fast sweeping algorithm are proven. Convergence and error estimates of the algorithm for computing the distance function is studied in detail. It is shown that 2n Gauss-Seidel iterations is enough for the distance function in n dimensions. An estimation of the number of iterations for general Eikonal equations is also studied. Numerical examples are used to verify the analysis.", "title": "" }, { "docid": "5744e87741b6154b333e0f24bb17f0ea", "text": "We describe two new related resources that facilitate modelling of general knowledge reasoning in 4th grade science exams. The first is a collection of curated facts in the form of tables, and the second is a large set of crowd-sourced multiple-choice questions covering the facts in the tables. Through the setup of the crowd-sourced annotation task we obtain implicit alignment information between questions and tables. We envisage that the resources will be useful not only to researchers working on question answering, but also to people investigating a diverse range of other applications such as information extraction, question parsing, answer type identification, and lexical semantic modelling.", "title": "" }, { "docid": "7e6a3a04c24a0fc24012619d60ebb87b", "text": "The recent trend toward democratization in countries throughout the globe has challenged scholars to pursue two potentially contradictory goals: to develop a differentiated conceptualization of democracy that captures the diverse experiences of these countries; and to extend the analysis to this broad range of cases without ‘stretching’ the concept. This paper argues that this dual challenge has led to a proliferation of conceptual innovations, including hundreds of subtypes of democracy—i.e., democracy ‘with adjectives.’ The paper explores the strengths and weaknesses of three important strategies of innovation that have emerged: ‘precising’ the definition of democracy; shifting the overarching concept with which democracy is associated; and generating various forms of subtypes. Given the complex structure of meaning produced by these strategies for refining the concept of democracy, we conclude by offering an old piece of advice with renewed urgency: It is imperative that scholars situate themselves in relation to this structure of meaning by clearly defining and explicating the conception of democracy they are employing.", "title": "" }, { "docid": "5ea5650e03be82a600159c2095c387b6", "text": "The medicinal plants are widely used by the traditional medicinal practitioners for curing various diseases in their day to day practice. In traditional system of medicine, different parts (leaves, stem, flower, root, seeds and even whole plant) of Ocimum sanctum Linn. have been recommended for the treatment of bronchitis, malaria, diarrhea, dysentery, skin disease, arthritis, eye diseases, insect bites and so on. The O. sanctum L. has also been suggested to possess anti-fertility, anticancer, antidiabetic, antifungal, antimicrobial, cardioprotective, analgesic, antispasmodic and adaptogenic actions. Eugenol (1-hydroxy-2-methoxy-4-allylbenzene), the active constituents present in O. sanctum L. have been found to be largely responsible for the therapeutic potentials. The pharmacological studies reported in the present review confirm the therapeutic value of O. sanctum L. 
The results of the above studies support the use of this plant for human and animal disease therapy and reinforce the importance of the ethno-botanical approach as a potential source of bioactive substances.", "title": "" }, { "docid": "1830c839960f8ce9b26c906cc21e2a39", "text": "This comparative review highlights the relationships between the disciplines of bloodstain pattern analysis (BPA) in forensics and that of fluid dynamics (FD) in the physical sciences. In both the BPA and FD communities, scientists study the motion and phase change of a liquid in contact with air, or with other liquids or solids. Five aspects of BPA related to FD are discussed: the physical forces driving the motion of blood as a fluid; the generation of the drops; their flight in the air; their impact on solid or liquid surfaces; and the production of stains. For each of these topics, the relevant literature from the BPA community and from the FD community is reviewed. Comments are provided on opportunities for joint BPA and FD research, and on the development of novel FD-based tools and methods for BPA. Also, the use of dimensionless numbers is proposed to inform BPA analyses.", "title": "" } ]
scidocsrr
76ed18681f0b79466975597be0c2545e
Cannabinoid signaling and liver therapeutics.
[ { "docid": "7e2bbd260e58d84a4be8b721cdf51244", "text": "Obesity is characterised by altered gut microbiota, low-grade inflammation and increased endocannabinoid (eCB) system tone; however, a clear connection between gut microbiota and eCB signalling has yet to be confirmed. Here, we report that gut microbiota modulate the intestinal eCB system tone, which in turn regulates gut permeability and plasma lipopolysaccharide (LPS) levels. The impact of the increased plasma LPS levels and eCB system tone found in obesity on adipose tissue metabolism (e.g. differentiation and lipogenesis) remains unknown. By interfering with the eCB system using CB(1) agonist and antagonist in lean and obese mouse models, we found that the eCB system controls gut permeability and adipogenesis. We also show that LPS acts as a master switch to control adipose tissue metabolism both in vivo and ex vivo by blocking cannabinoid-driven adipogenesis. These data indicate that gut microbiota determine adipose tissue physiology through LPS-eCB system regulatory loops and may have critical functions in adipose tissue plasticity during obesity.", "title": "" } ]
[ { "docid": "b357803105e6558f32061bdef0b0d6c3", "text": "We present a modular controller for quadruped locomotion over unperceived rough terrain. Our approach is based on a computational Central Pattern Generator (CPG) model implemented as coupled nonlinear oscillators. Stumbling correction reflex is implemented as a sensory feedback mechanism affecting the CPG. We augment the outputs of the CPG with virtual model control torques responsible for posture control. The control strategy is validated on a 3D forward dynamics simulated quadruped robot platform of about the size and weight of a cat. To demonstrate the capabilities of the proposed approach, we perform locomotion over unperceived uneven terrain and slopes, as well as situations facing external pushes.", "title": "" }, { "docid": "2bb39c3428116cef1f60cd1c5d36613e", "text": "Digital video signal is widely used in modern society. There is increasing demand for it to be more secure and highly reliable. Focusing on this, we propose a method of detecting mosaic blocks. Our proposed method combines two algorithms: HOG with SVM classifier and template matching. We also consider characteristics of mosaic blocks other than shape. Experimental results show that our proposed method has high detection performance of mosaic blocks.", "title": "" }, { "docid": "e498e5f0b1174e465dbef8747545f5a7", "text": "We propose a novel architecture for k-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our model, a part of the encoder output is interpreted as a confidence region estimate about the embedding point, and expressed as a Gaussian covariance matrix. Our network then constructs a direction and class dependent distance metric on the embedding space, using uncertainties of individual data points as weights. We show that Gaussian prototypical networks are a preferred architecture over vanilla prototypical networks with an equivalent number of parameters. We report state-ofthe-art performance in 1-shot and 5-shot classification both in 5-way and 20-way regime (for 5-shot 5-way, we are comparable to previous state-of-the-art) on the Omniglot dataset. We explore artificially down-sampling a fraction of images in the training set, which improves our performance even further. We therefore hypothesize that Gaussian prototypical networks might perform better in less homogeneous, noisier datasets, which are commonplace in real world applications.", "title": "" }, { "docid": "96a79bc015e34db18e32a31bfaaace36", "text": "We consider social media as a promising tool for public health, focusing on the use of Twitter posts to build predictive models about the forthcoming influence of childbirth on the behavior and mood of new mothers. Using Twitter posts, we quantify postpartum changes in 376 mothers along dimensions of social engagement, emotion, social network, and linguistic style. We then construct statistical models from a training set of observations of these measures before and after the reported childbirth, to forecast significant postpartum changes in mothers. The predictive models can classify mothers who will change significantly following childbirth with an accuracy of 71%, using observations about their prenatal behavior, and as accurately as 80-83% when additionally leveraging the initial 2-3 weeks of postnatal data. 
The study is motivated by the opportunity to use social media to identify mothers at risk of postpartum depression, an underreported health concern among large populations, and to inform the design of low-cost, privacy-sensitive early-warning systems and intervention programs aimed at promoting wellness postpartum.", "title": "" }, { "docid": "4163070f45dd4d252a21506b1abcfff4", "text": "Nowadays, security solutions are mainly focused on providing security defences, instead of solving one of the main reasons for security problems that refers to an appropriate Information Systems (IS) design. In fact, requirements engineering often neglects enough attention to security concerns. In this paper it will be presented a case study of our proposal, called SREP (Security Requirements Engineering Process), which is a standard-centred process and a reuse-based approach which deals with the security requirements at the earlier stages of software development in a systematic and intuitive way by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle. In brief, a case study is shown in this paper demonstrating how the security requirements for a security critical IS can be obtained in a guided and systematic way by applying SREP.", "title": "" }, { "docid": "676540e4b0ce65a71e86bf346f639f22", "text": "Methylation is a prevalent posttranscriptional modification of RNAs. However, whether mammalian microRNAs are methylated is unknown. Here, we show that the tRNA methyltransferase NSun2 methylates primary (pri-miR-125b), precursor (pre-miR-125b), and mature microRNA 125b (miR-125b) in vitro and in vivo. Methylation by NSun2 inhibits the processing of pri-miR-125b2 into pre-miR-125b2, decreases the cleavage of pre-miR-125b2 into miR-125, and attenuates the recruitment of RISC by miR-125, thereby repressing the function of miR-125b in silencing gene expression. Our results highlight the impact of miR-125b function via methylation by NSun2.", "title": "" }, { "docid": "d984489b4b71eabe39ed79fac9cf27a1", "text": "Remote sensing from airborne and spaceborne platforms provides valuable data for mapping, environmental monitoring, disaster management and civil and military intelligence. However, to explore the full value of these data, the appropriate information has to be extracted and presented in standard format to import it into geo-information systems and thus allow efficient decision processes. The object-oriented approach can contribute to powerful automatic and semiautomatic analysis for most remote sensing applications. Synergetic use to pixel-based or statistical signal processing methods explores the rich information contents. Here, we explain principal strategies of object-oriented analysis, discuss how the combination with fuzzy methods allows implementing expert knowledge and describe a representative example for the proposed workflow from remote sensing imagery to GIS. The strategies are demonstrated using the first objectoriented image analysis software on the market, eCognition, which provides an appropriate link between remote sensing", "title": "" }, { "docid": "05d8383eb6b1c6434f75849859c35fd0", "text": "This paper proposes a robust approach for image based floor detection and segmentation from sequence of images or video. 
In contrast to many previous approaches, which uses a priori knowledge of the surroundings, our method uses combination of modified sparse optical flow and planar homography for ground plane detection which is then combined with graph based segmentation for extraction of floor from images. We also propose a probabilistic framework which makes our method adaptive to the changes in the surroundings. We tested our algorithm on several common indoor environment scenarios and were able to extract floor even under challenging circumstances. We obtained extremely satisfactory results in various practical scenarios such as where the floor and non floor areas are of same color, in presence of textured flooring, and where illumination changes are steep.", "title": "" }, { "docid": "679759d8f8e4c4ef5a2bb1356a61d7f5", "text": "This paper describes a method of implementing two factor authentication using mobile phones. The proposed method guarantees that authenticating to services, such as online banking or ATM machines, is done in a very secure manner. The proposed system involves using a mobile phone as a software token for One Time Password generation. The generated One Time Password is valid for only a short user-defined period of time and is generated by factors that are unique to both, the user and the mobile device itself. Additionally, an SMS-based mechanism is implemented as both a backup mechanism for retrieving the password and as a possible mean of synchronization. The proposed method has been implemented and tested. Initial results show the success of the proposed method.", "title": "" }, { "docid": "c741867c7d29026da910c52be073942d", "text": "In this report we summarize the results of the SemEval 2016 Task 8: Meaning Representation Parsing. Participants were asked to generate Abstract Meaning Representation (AMR) (Banarescu et al., 2013) graphs for a set of English sentences in the news and discussion forum domains. Eleven sites submitted valid systems. The availability of state-of-the-art baseline systems was a key factor in lowering the bar to entry; many submissions relied on CAMR (Wang et al., 2015b; Wang et al., 2015a) as a baseline system and added extensions to it to improve scores. The evaluation set was quite difficult to parse, particularly due to creative approaches to word representation in the web forum portion. The top scoring systems scored 0.62 F1 according to the Smatch (Cai and Knight, 2013) evaluation heuristic. We show some sample sentences along with a comparison of system parses and perform quantitative ablative studies.", "title": "" }, { "docid": "429c900f6ac66bcea5aa068d27f5b99f", "text": "Recent researches shows that Brain Computer Interface (BCI) technology provides effective way of communication between human and physical device. In this work, an EEG based wireless mobile robot is implemented for people suffer from motor disabilities can interact with physical devices based on Brain Computer Interface (BCI). An experimental model of mobile robot is explored and it can be controlled by human eye blink strength. EEG signals are acquired from NeuroSky Mind wave Sensor (single channel prototype) in non-invasive manner and Signal features are extracted by adopting Discrete Wavelet Transform (DWT) to amend the signal resolution. We analyze and compare the db4 and db7 wavelets for accurate classification of blink signals. Different classes of movements are achieved based on different blink strength of user. 
The experimental setup of adaptive human machine interface system provides better accuracy and navigates the mobile robot based on user command, so it can be adaptable for disabled people.", "title": "" }, { "docid": "4839938502248899c8adc9b6ef359c52", "text": "This paper introduces an overview and positioning of the contemporary brand experience in the digital context. With technological advances in games, gamification and emerging technologies, such as Virtual Reality (VR) and Artificial Intelligence (AI), it is possible that brand experiences are getting more pervasive and seamless. In this paper, we review the current theories around multi-sensory brand experience and the role of new technologies in the whole consumer journey, including pre-purchase, purchase and post-purchase stages. After this analysis, we introduce a conceptual framework that promotes a continuous loop of consumer experience and engagement from different and new touch points, which could be augmented by games, gamification and emerging technologies. Based on the framework, we conclude this paper with propositions, examples and recommendations for future research in contemporary brand management, which could help brand managers and designers to deal with technological challenges posed by the contemporary society.", "title": "" }, { "docid": "0cc25de8ea70fe1fd85824e8f3155bf7", "text": "When integrating information from multiple websites, the same data objects can exist in inconsistent text formats across sites, making it difficult to identify matching objects using exact text match. We have developed an object identification system called Active Atlas, which compares the objects’ shared attributes in order to identify matching objects. Certain attributes are more important for deciding if a mapping should exist between two objects. Previous methods of object identification have required manual construction of object identification rules or mapping rules for determining the mappings between objects. This manual process is time consuming and error-prone. In our approach, Active Atlas learns to tailor mapping rules, through limited user input, to a specific application domain. The experimental results demonstrate that we achieve higher accuracy and require less user involvement than previous methods across various application domains.", "title": "" }, { "docid": "e9768df1b2a679e7d9e81588d4c2af02", "text": "Over the last few decades, the electric utilities have seen a very significant increase in the application of metal oxide surge arresters on transmission lines in an effort to reduce lightning initiated flashovers, maintain high power quality and to avoid damages and disturbances especially in areas with high soil resistivity and lightning ground flash density. For economical insulation coordination in transmission and substation equipment, it is necessary to predict accurately the lightning surge overvoltages that occur on an electric power system.", "title": "" }, { "docid": "238aac56366875b1714284d3d963fe9b", "text": "We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function f enables a user holding encryptions of $$x_1, \\ldots , x_t$$ x1,…,xt to compute $$f(x_1, \\ldots , x_t)$$ f(x1,…,xt) but nothing else. 
This is achieved starting from any general-purpose private-key single-input scheme (without any additional assumptions) and is proven to be adaptively secure for any constant number of inputs t. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is sub-exponentially secure. Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security assumptions and functionality. Previous and concurrent constructions of multi-input functional encryption schemes either rely on stronger assumptions and provided weaker security guarantees (Goldwasser et al. in Advances in cryptology—EUROCRYPT, 2014; Ananth and Jain in Advances in cryptology—CRYPTO, 2015), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al. in Advances in cryptology—EUROCRYPT, 2015). In comparison, we present a general transformation that simultaneously relies on weaker assumptions and guarantees stronger security.", "title": "" }, { "docid": "d4aaea0107cbebd7896f4cb57fa39c05", "text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs", "title": "" }, { "docid": "6e2d7dae0891a2f3a8f02fdb81af9dc6", "text": "Wireless Sensor Networks (WSNs) are charac-terized by multi-hop wireless connectivity, frequently changing network topology and need for efficient routing protocols. The purpose of this paper is to evaluate performance of routing protocol DSDV in wireless sensor network (WSN) scales regarding the End-to-End delay and throughput PDR with mobility factor .Routing protocols are a critical aspect to performance in mobile wireless networks and play crucial role in determining network performance in terms of packet delivery fraction, end-to-end delay and packet loss. Destination-sequenced distance vector (DSDV) protocol is a proactive protocol depending on routing tables which are maintained at each node. The routing protocol should detect and maintain optimal route(s) between source and destination nodes. In this paper, we present application of DSDV in WSN as extend to our pervious study to the design and impleme-ntation the details of the DSDV routing protocol in MANET using the ns-2 network simulator.", "title": "" }, { "docid": "b89259a915856b309a02e6e7aa6c957f", "text": "The paper proposes a comprehensive information security maturity model (ISMM) that addresses both technical and socio/non-technical security aspects. 
The model is intended for securing e-government services (implementation and service delivery) in an emerging and increasing security risk environment. The paper utilizes extensive literature review and survey study approaches. A total of eight existing ISMMs were selected and critically analyzed. Models were then categorized into security awareness, evaluation and management orientations. Based on the model’s strengths – three models were selected to undergo further analyses and then synthesized. Each of the three selected models was either from the security awareness, evaluation or management orientations category. To affirm the findings – a survey study was conducted into six government organizations located in Tanzania. The study was structured to a large extent by the security controls adopted from the Security By Consensus (SBC) model. Finally, an ISMM with five critical maturity levels was proposed. The maturity levels were: undefined, defined, managed, controlled and optimized. The papers main contribution is the proposed model that addresses both technical and non-technical security services within the critical maturity levels. Additionally, the paper enhances awareness and understanding on the needs for security in e-government services to stakeholders.", "title": "" }, { "docid": "a5cd94446abfc46c6d5c4e4e376f1e0a", "text": "Commitment problem in credit market and its eãects on economic growth are discussed. Completions of investment projects increase capital stock of the economy. These projects require credits which are ånanced by ånacial intermediaries. A simpliåed credit model of Dewatripont and Maskin is used to describe the ånancing process, in which the commitment problem or the \\soft budget constraint\" problem arises. However, in dynamic general equilibrium setup with endougenous determination of value and cost of projects, there arise multiple equilibria in the project ånancing model, namely reånancing equilirium and no-reånancing equilibrium. The former leads the economy to the stationary state with smaller capital stock level than the latter. Both the elimination of reånancing equilibrium and the possibility of \\Animal Spirits Cycles\" equilibrium are also discussed.", "title": "" }, { "docid": "fa51c71a66a8348dae241272a71b27e2", "text": "Achieving balance between convergence and diversity is a key issue in evolutionary multiobjective optimization. Most existing methodologies, which have demonstrated their niche on various practical problems involving two and three objectives, face significant challenges in many-objective optimization. This paper suggests a unified paradigm, which combines dominance- and decomposition-based approaches, for many-objective optimization. Our major purpose is to exploit the merits of both dominance- and decomposition-based approaches to balance the convergence and diversity of the evolutionary process. The performance of our proposed method is validated and compared with four state-of-the-art algorithms on a number of unconstrained benchmark problems with up to 15 objectives. Empirical results fully demonstrate the superiority of our proposed method on all considered test instances. In addition, we extend this method to solve constrained problems having a large number of objectives. Compared to two other recently proposed constrained optimizers, our proposed method shows highly competitive performance on all the constrained optimization problems.", "title": "" } ]
scidocsrr
4bdd8803192ea4cb8b47adefd6e45054
On-Line Mobile Robot Model Identification Using Integrated Perturbative Dynamics
[ { "docid": "14827ea435d82e4bfe481713af45afed", "text": "This paper introduces a model-based approach to estimating longitudinal wheel slip and detecting immobilized conditions of autonomous mobile robots operating on outdoor terrain. A novel tire traction/braking model is presented and used to calculate vehicle dynamic forces in an extended Kalman filter framework. Estimates of external forces and robot velocity are derived using measurements from wheel encoders, inertial measurement unit, and GPS. Weak constraints are used to constrain the evolution of the resistive force estimate based upon physical reasoning. Experimental results show the technique accurately and rapidly detects robot immobilization conditions while providing estimates of the robot's velocity during normal driving. Immobilization detection is shown to be robust to uncertainty in tire model parameters. Accurate immobilization detection is demonstrated in the absence of GPS, indicating the algorithm is applicable for both terrestrial applications and space robotics.", "title": "" } ]
[ { "docid": "c5b8c7fa8518595196aa48740578cb05", "text": "Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks, however, from model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore many systems are still identified and controlled based on simple linear models despite their poor representation capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks, by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture temporal behavior of dynamical systems. Then optimal controllers can be achieved via solving a convex model predictive control problem. Experiment results demonstrate the good potential of the proposed input convex neural network based approach in a variety of control applications. In particular we show that in the MuJoCo locomotion tasks, we could achieve over 10% higher performance using 5× less time compared with state-of-the-art model-based reinforcement learning method; and in the building HVAC control example, our method achieved up to 20% energy reduction compared with classic linear models.", "title": "" }, { "docid": "633cce3860a44e5931d93dc3e83f14f4", "text": "The main theme of this paper is to present a new digital-controlled technique for battery charger to achieve constant current and voltage control while not requiring current feedback. The basic idea is to achieve constant current charging control by limiting the duty cycle of charger. Therefore, the current feedback signal is not required and thereby reducing the cost of A/D converter, current sensor, and computation complexity required for current control. Moreover, when the battery voltage is increased to the preset voltage level using constant current charge, the charger changes the control mode to constant voltage charge. A digital-controlled charger is designed and implemented for uninterrupted power supply (UPS) applications. The charger control is based upon the proposed control method in software. As a result, the UPS control, including boost converter, charger, and inverter control can be realized using only one low cost MCU. Experimental results demonstrate that the effectiveness of the design and implementation.", "title": "" }, { "docid": "95f9547a510ca82b283c59560b5a93c6", "text": "Human action recognition in videos is one of the most challenging tasks in computer vision. One important issue is how to design discriminative features for representing spatial context and temporal dynamics. Here, we introduce a path signature feature to encode information from intra-frame and inter-frame contexts. A key step towards leveraging this feature is to construct the proper trajectories (paths) for the data steam. In each frame, the correlated constraints of human joints are treated as small paths, then the spatial path signature features are extracted from them. In video data, the evolution of these spatial features over time can also be regarded as paths from which the temporal path signature features are extracted. Eventually, all these features are concatenated to constitute the input vector of a fully connected neural network for action classification. 
Experimental results on four standard benchmark action datasets, J-HMDB, SBU Dataset, Berkeley MHAD, and NTURGB+D demonstrate that the proposed approach achieves state-of-the-art accuracy even in comparison with recent deep learning based models.", "title": "" }, { "docid": "a212a2969c0c72894dcde880bbf29fa7", "text": "Machine learning is useful for building robust learning models, and it is based on a set of features that identify a state of an object. Unfortunately, some data sets may contain a large number of features making, in some cases, the learning process time consuming and the generalization capability of machine learning poor. To make a data set easy to learn and understand, it is typically recommended to remove the most irrelevant features from the set. However, choosing what data should be kept or eliminated may be performed by complex selection algorithms, and optimal feature selection may require an exhaustive search of all possible subsets of features which is computationally expensive. This paper proposes a simple method to perform feature selection using artificial neural networks. It is shown experimentally that genetic algorithms in combination with artificial neural networks can easily be used to extract those features that are required to produce a desired result. Experimental results show that very few hidden neurons are required for feature selection as artificial neural networks are only used to assess the quality of an individual, which is a chosen subset of features.", "title": "" }, { "docid": "1f2832276b346316b15fe05d8593217c", "text": "This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools.", "title": "" }, { "docid": "a411780d406e8b720303d18cd6c9df68", "text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. 
Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.", "title": "" }, { "docid": "d462883de69e86cec8631d195a8a064d", "text": "Micro Unmanned Aerial Vehicles (UAVs) such as quadrocopters have gained great popularity over the last years, both as a research platform and in various application fields. However, some complex application scenarios call for the formation of swarms consisting of multiple drones. In this paper a platform for the creation of such swarms is presented. It is based on commercially available quadrocopters enhanced with on-board processing and communication units enabling full autonomy of individual drones. Furthermore, a generic ground control station is presented that serves as integration platform. It allows the seamless coordination of different kinds of sensor platforms.", "title": "" }, { "docid": "100ab34e96da2b8640bd97467e9c91e1", "text": "Manual work is taken over the robot technology and many of the related robot appliances are being used extensively also. Here represents the technology that proposed the working of robot for Floor cleaning. This floor cleaner robot can work in any of two modes i.e. “Automatic and Manual”. All hardware and software operations are controlled by AT89S52 microcontroller. This robot can perform sweeping and mopping task. RF modules have been used for wireless communication between remote (manual mode) and robot and having range 50m. This robot is incorporated with IR sensor for obstacle detection and automatic water sprayer pump. Four motors are used, two for cleaning, one for water pump and one for wheels. Dual relay circuit used to drive the motors one for water pump and another for cleaner. In previous work, there was no automatic water sprayer used and works only in automatic mode. In the automatic mode robot control all the operations itself and change the lane in case of hurdle detection and moves back. In the manual mode, the keypad is used to perform the expected task and to operate robot. In manual mode, RF module has been used to transmit and receive the information between remote and robot and display the information related to the hurdle detection on LCD. The whole circuitry is connected with 12V battery.", "title": "" }, { "docid": "9e32991f47d2d480ed35e488b85dfb79", "text": "Convolutional Neural Networks (CNNs) are powerful models that achieve impressive results for image classification. In addition, pre-trained CNNs are also useful for other computer vision tasks as generic feature extractors [1]. This paper aims to gain insight into the feature aspect of CNN and demonstrate other uses of CNN features. Our results show that CNN feature maps can be used with Random Forests and SVM to yield classification results that outperforms the original CNN. A CNN that is less than optimal (e.g. not fully trained or overfitting) can also extract features for Random Forest/SVM that yield competitive classification accuracy. 
In contrast to the literature which uses the top-layer activations as feature representation of images for other tasks [1], using lower-layer features can yield better results for classification.", "title": "" }, { "docid": "d752bf764e4518cee561b11146d951c4", "text": "Speech recognition is an increasingly important input modality, especially for mobile computing. Because errors are unavoidable in real applications, efficient correction methods can greatly enhance the user experience. In this paper we study a reranking and classification strategy for choosing word alternates to display to the user in the framework of a tap-to-correct interface. By employing a logistic regression model to estimate the probability that an alternate will offer a useful correction to the user, we can significantly reduce the average length of the alternates lists generated with no reduction in the number of words they are able to correct.", "title": "" }, { "docid": "edd78912d764ab33e0e1a8124bc7d709", "text": "Natural language understanding and dialogue policy learning are both essential in conversational systems that predict the next system actions in response to a current user utterance. Conventional approaches aggregate separate models of natural language understanding (NLU) and system action prediction (SAP) as a pipeline that is sensitive to noisy outputs of error-prone NLU. To address the issues, we propose an end-to-end deep recurrent neural network with limited contextual dialogue memory by jointly training NLU and SAP on DSTC4 multi-domain human-human dialogues. Experiments show that our proposed model significantly outperforms the state-of-the-art pipeline models for both NLU and SAP, which indicates that our joint model is capable of mitigating the affects of noisy NLU outputs, and NLU model can be refined by error flows backpropagating from the extra supervised signals of system actions.", "title": "" }, { "docid": "fe194d04f5bb78c5fa40e93fc6046b42", "text": "Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, EnglishFrench and Chinese-to-English translation tasks.", "title": "" }, { "docid": "75228d9fd5255ecb753ee3b465640d97", "text": "To pave the way towards disclosing the full potential of 5G networking, emerging Mobile Edge Computing techniques are gaining momentum in both academic and industrial research as a means to enhance infrastructure scalability and reliability by moving control functions close to the edge of the network. 
After the promising results under achievement within the EU Mobile Cloud Networking project, we claim the suitability of deploying Evolved Packet Core (EPC) support solutions as a Service (EPCaaS) over a uniform edge cloud infrastructure of Edge Nodes, by following the concepts of Network Function Virtualization (NFV). This paper originally focuses on the support needed for efficient elasticity provisioning of EPCaaS stateful components, by proposing novel solutions for effective subscribers' state management in quality-constrained 5G scenarios. In particular, to favor flexibility and high-availability against network function failures, we have developed a state sharing mechanism across different data centers even in presence of firewall/network encapsulation. In addition, our solution can dynamically select which state portions should be shared and to which Edge Nodes. The reported experimental results, measured over the widely recognized Open5GCore testbed, demonstrate the feasibility and effectiveness of the approach, as well as its capability to satisfy \"carrier-grade\" quality requirements while ensuring good elasticity and scalability.", "title": "" }, { "docid": "0c7eff3e7c961defce07b98914431414", "text": "The navigational system of the mammalian cortex comprises a number of interacting brain regions. Grid cells in the medial entorhinal cortex and place cells in the hippocampus are thought to participate in the formation of a dynamic representation of the animal's current location, and these cells are presumably critical for storing the representation in memory. To traverse the environment, animals must be able to translate coordinate information from spatial maps in the entorhinal cortex and hippocampus into body-centered representations that can be used to direct locomotion. How this is done remains an enigma. We propose that the posterior parietal cortex is critical for this transformation.", "title": "" }, { "docid": "cfa036aa6eb15b3634fae9a2f3f137da", "text": "We present a high-efficiency transmitter based on asymmetric multilevel outphasing (AMO). AMO transmitters improve their efficiency over LINC (linear amplification using nonlinear components) transmitters by switching the output envelopes of the power amplifiers among a discrete set of levels. This minimizes the occurrence of large outphasing angles, reducing the energy lost in the power combiner. We demonstrate this concept with a 2.5-GHz, 20-dBm peak output power transmitter using 2-level AMO designed in a 65-nm CMOS process. To the authors' knowledge, this IC is the first integrated implementation of the AMO concept. At peak output power, the measured power-added efficiency is 27.8%. For a 16-QAM signal with 6.1dB peak-to-average power ratio, the AMO prototype improves the average efficiency from 4.7% to 10.0% compared to the standard LINC system.", "title": "" }, { "docid": "af486334ab8cae89d9d8c1c17526d478", "text": "Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. 
Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.", "title": "" }, { "docid": "af1f047dca3a4d7cbd75c84e5d8d1552", "text": "UNLABELLED\nAcupuncture is a therapeutic treatment that is defined as the insertion of needles into the body at specific points (ie, acupoints). Advances in functional neuroimaging have made it possible to study brain responses to acupuncture; however, previous studies have mainly concentrated on acupoint specificity. We wanted to focus on the functional brain responses that occur because of needle insertion into the body. An activation likelihood estimation meta-analysis was carried out to investigate common characteristics of brain responses to acupuncture needle stimulation compared to tactile stimulation. A total of 28 functional magnetic resonance imaging studies, which consisted of 51 acupuncture and 10 tactile stimulation experiments, were selected for the meta-analysis. Following acupuncture needle stimulation, activation in the sensorimotor cortical network, including the insula, thalamus, anterior cingulate cortex, and primary and secondary somatosensory cortices, and deactivation in the limbic-paralimbic neocortical network, including the medial prefrontal cortex, caudate, amygdala, posterior cingulate cortex, and parahippocampus, were detected and assessed. Following control tactile stimulation, weaker patterns of brain responses were detected in areas similar to those stated above. The activation and deactivation patterns following acupuncture stimulation suggest that the hemodynamic responses in the brain simultaneously reflect the sensory, cognitive, and affective dimensions of pain.\n\n\nPERSPECTIVE\nThis article facilitates a better understanding of acupuncture needle stimulation and its effects on specific activity changes in different brain regions as well as its relationship to the multiple dimensions of pain. Future studies can build on this meta-analysis and will help to elucidate the clinically relevant therapeutic effects of acupuncture.", "title": "" }, { "docid": "feeb5741fae619a37f44eae46169e9d1", "text": "A 24-GHz novel active quasi-circulator is developed in TSMC 0.18-µm CMOS. We proposed a new architecture by using the canceling mechanism to achieve high isolations and reduce the circuit area. The measured insertion losses |S<inf>32</inf>| and |S<inf>21</inf>| are 9 and 8.5 dB, respectively. The isolation |S<inf>31</inf>| is greater than 30 dB. The dc power consumption is only 9.12 mW with a chip size of 0.35 mm<sup>2</sup>.", "title": "" }, { "docid": "bf6d56c2fd716802b8e2d023f86a4225", "text": "This is the first case report to demonstrate the efficacy of immersive computer-generated virtual reality (VR) and mixed reality (touching real objects which patients also saw in VR) for the treatment of spider phobia. The subject was a 37-yr-old female with severe and incapacitating fear of spiders. Twelve weekly 1-hr sessions were conducted over a 3-month period. Outcome was assessed on measures of anxiety, avoidance, and changes in behavior toward real spiders. 
VR graded exposure therapy was successful for reducing fear of spiders, providing converging evidence for a growing literature showing the effectiveness of VR as a new medium for exposure therapy.", "title": "" } ]
scidocsrr
16a707893095f361f70f43871bf7d077
DeepCredit: Exploiting User Clickstream for Loan Risk Prediction in P2P Lending
[ { "docid": "ad0688b0c80cf6eeed13a2a9b112f97c", "text": "P2P lending is an emerging Internet-based application where individuals can directly borrow money from each other. The past decade has witnessed the rapid development and prevalence of online P2P lending platforms, examples of which include Prosper, LendingClub, and Kiva. Meanwhile, extensive research has been done that mainly focuses on the studies of platform mechanisms and transaction data. In this article, we provide a comprehensive survey on the research about P2P lending, which, to the best of our knowledge, is the first focused effort in this field. Specifically, we first provide a systematic taxonomy for P2P lending by summarizing different types of mainstream platforms and comparing their working mechanisms in detail. Then, we review and organize the recent advances on P2P lending from various perspectives (e.g., economics and sociology perspective, and data-driven perspective). Finally, we propose our opinions on the prospects of P2P lending and suggest some future research directions in this field. Meanwhile, throughout this paper, some analysis on real-world data collected from Prosper and Kiva are also conducted.", "title": "" }, { "docid": "fb223abb83654f316da33d9c97f3173f", "text": "Online peer-to-peer (P2P) lending services are a new type of social platform that enables individuals borrow and lend money directly from one to another. In this paper, we study the dynamics of bidding behavior in a P2P loan auction website, Prosper.com. We investigate the change of various attributes of loan requesting listings over time, such as the interest rate and the number of bids. We observe that there is herding behavior during bidding, and for most of the listings, the numbers of bids they receive reach spikes at very similar time points. We explain these phenomena by showing that there are economic and social factors that lenders take into account when deciding to bid on a listing. We also observe that the profits the lenders make are tied with their bidding preferences. Finally, we build a model based on the temporal progression of the bidding, that reliably predicts the success of a loan request listing, as well as whether a loan will be paid back or not.", "title": "" } ]
[ { "docid": "bf9ba92f1c7aa2ae4ed32dd270552eb0", "text": "Video-based person re-identification (re-id) is a central application in surveillance systems with significant concern in security. Matching persons across disjoint camera views in their video fragments is inherently challenging due to the large visual variations and uncontrolled frame rates. There are two steps crucial to person re-id, namely discriminative feature learning and metric learning. However, existing approaches consider the two steps independently, and they do not make full use of the temporal and spatial information in videos. In this paper, we propose a Siamese attention architecture that jointly learns spatiotemporal video representations and their similarity metrics. The network extracts local convolutional features from regions of each frame, and enhance their discriminative capability by focusing on distinct regions when measuring the similarity with another pedestrian video. The attention mechanism is embedded into spatial gated recurrent units to selectively propagate relevant features and memorize their spatial dependencies through the network. The model essentially learns which parts (where) from which frames (when) are relevant and distinctive for matching persons and attaches higher importance therein. The proposed Siamese model is end-to-end trainable to jointly learn comparable hidden representations for paired pedestrian videos and their similarity value. Extensive experiments on three benchmark datasets show the effectiveness of each component of the proposed deep network while outperforming state-of-the-art methods.", "title": "" }, { "docid": "2bfeadedeb38d1a923779f036b305906", "text": "A monolithic high-resolution (individual pixel size 300times300 mum2) active matrix (AM) programmed 8times8 micro-LED array was fabricated using flip-chip technology. The display was composed of an AM panel and a LED microarray. The AM panel included driving circuits composed of p-type MOS transistors for each pixel. The n-electrodes of the LED pixels in the microarray were connected together, and the p-electrodes were connected to individual outputs of the driving circuits on the AM panel. Using flip-chip technology, the LED microarray was then flipped onto the AM panel to create a microdisplay.", "title": "" }, { "docid": "27d7f7935c235a3631fba6e3df08f623", "text": "We investigate the task of Named Entity Recognition (NER) in the domain of biomedical text. There is little published work employing modern neural network techniques in this domain, probably due to the small sizes of human-labeled data sets, as non-trivial neural models would have great difficulty avoiding overfitting. In this work we follow a semi-supervised learning approach: We first train state-of-the art (deep) neural networks on a large corpus of noisy machine-labeled data, then “transfer” and fine-tune the learned model on two higher-quality humanlabeled data sets. This approach yields higher performance than the current best published systems for the class DISEASE. It trails but is not far from the currently best systems for the class CHEM.", "title": "" }, { "docid": "7437f0c8549cb8f73f352f8043a80d19", "text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. 
The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.", "title": "" }, { "docid": "065ca3deb8cb266f741feb67e404acb5", "text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet", "title": "" }, { "docid": "8b15435562b287eb97a6c573222797ec", "text": "Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection to a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and the CNN trained for classification in practical scenarios.", "title": "" }, { "docid": "5a5b30b63944b92b168de7c17d5cdc5e", "text": "We introduce the Densely Segmented Supermarket (D2S) dataset, a novel benchmark for instance-aware semantic segmentation in an industrial domain. It contains 21 000 high-resolution images with pixel-wise labels of all object instances. The objects comprise groceries and everyday products from 60 categories. The benchmark is designed such that it resembles the real-world setting of an automatic checkout, inventory, or warehouse system. The training images only contain objects of a single class on a homogeneous background, while the validation and test sets are much more complex and diverse. To further benchmark the robustness of instance segmentation methods, the scenes are acquired with different lightings, rotations, and backgrounds. We ensure that there are no ambiguities in the labels and that every instance is labeled comprehensively. The annotations are pixel-precise and allow using crops of single instances for articial data augmentation. 
The dataset covers several challenges highly relevant in the field, such as a limited amount of training data and a high diversity in the test and validation sets. The evaluation of state-of-the-art object detection and instance segmentation methods on D2S reveals significant room for improvement.", "title": "" }, { "docid": "1f6d0e820b169d13e961b672b75bde71", "text": "Prenatal stress can cause long-term effects on cognitive functions in offspring. Hippocampal synaptic plasticity, believed to be the mechanism underlying certain types of learning and memory, and known to be sensitive to behavioral stress, can be changed by prenatal stress. Whether enriched environment treatment (EE) in early postnatal periods can cause a recovery from these deficits is unknown. Experimental animals were Wistar rats. Prenatal stress was evoked by 10 foot shocks (0.8 mA for 1s, 2-3 min apart) in 30 min per day at gestational day 13-19. After weaning at postnatal day 22, experimental offspring were given the enriched environment treatment through all experiments until tested (older than 52 days age). Electrophysiological and Morris water maze testing was performed at 8 weeks of age. The results showed that prenatal stress impaired long-term potentiation (LTP) but facilitated long-term depression (LTD) in the hippocampal CA1 region in the slices. Furthermore, prenatal stress exacerbated the effects of acute stress on hippocampal LTP and LTD, and also impaired spatial learning and memory in the Morris water maze. However, all these deficits induced by prenatal stress were recovered by enriched environment treatment. This work observes a phenomenon that may contribute to the understanding of clinically important interactions among cognitive deficit, prenatal stress and enriched environment treatment. Enriched environment treatment on early postnatal periods may be one potentially important target for therapeutic interventions in preventing the prenatal stress-induced cognitive disorders.", "title": "" }, { "docid": "c9df206d8c0bc671f3109c1c7b12b149", "text": "Internet of Things (IoT) — a unified network of physical objects that can change the parameters of the environment or their own, gather information and transmit it to other devices. It is emerging as the third wave in the development of the internet. This technology will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. The IoT is enabled by the latest developments, smart sensors, communication technologies, and Internet protocols. This article contains a description of lnternet of things (IoT) networks. Much attention is given to prospects for future of using IoT and it's development. Some problems of development IoT are were noted. The article also gives valuable information on building(construction) IoT systems based on PLC technology.", "title": "" }, { "docid": "61b89a2be8b2acc34342dfcc0249f4d5", "text": "Transfer-learning and meta-learning are two effective methods to apply knowledge learned from large data sources to new tasks. In few-class, few-shot target task settings (i.e. when there are only a few classes and training examples available in the target task), meta-learning approaches that optimize for future task learning have outperformed the typical transfer approach of initializing model weights from a pre-trained starting point. 
But as we experimentally show, meta-learning algorithms that work well in the few-class setting do not generalize well in many-shot and many-class cases. In this paper, we propose a joint training approach that combines both transfer-learning and meta-learning. Benefiting from the advantages of each, our method obtains improved generalization performance on unseen target tasks in both few- and many-class and few- and many-shot scenarios.", "title": "" }, { "docid": "3a3a2261e1063770a9ccbd0d594aa561", "text": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.", "title": "" }, { "docid": "1bef1c66ac1e6e052f5751b11808d9d6", "text": "There is a growing trend towards attacks on database privacy due to great value of privacy information stored in big data set. Public's privacy are under threats as adversaries are continuously cracking their popular targets such as bank accounts. We find a fact that existing models such as K-anonymity, group records based on quasi-identifiers, which harms the data utility a lot. Motivated by this, we propose a sensitive attribute-based privacy model. Our model is the early work of grouping records based on sensitive attributes instead of quasi-identifiers which is popular in existing models. Random shuffle is used to maximize information entropy inside a group while the marginal distribution maintains the same before and after shuffling, therefore, our method maintains a better data utility than existing models. We have conducted extensive experiments which confirm that our model can achieve a satisfying privacy level without sacrificing data utility while guarantee a higher efficiency.", "title": "" }, { "docid": "14dd650afb3dae58ffb1a798e065825a", "text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance.
Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.", "title": "" }, { "docid": "da6771ebd128ce1dc58f2ab1d56b065f", "text": "We present a method for the automatic classification of text documents into a dynamically defined set of topics of interest. The proposed approach requires only a domain ontology and a set of user-defined classification topics, specified as contexts in the ontology. Our method is based on measuring the semantic similarity of the thematic graph created from a text document and the ontology sub-graphs resulting from the projection of the defined contexts. The domain ontology effectively becomes the classifier, where classification topics are expressed using the defined ontological contexts. In contrast to the traditional supervised categorization methods, the proposed method does not require a training set of documents. More importantly, our approach allows dynamically changing the classification topics without retraining of the classifier. In our experiments, we used the English language Wikipedia converted to an RDF ontology to categorize a corpus of current Web news documents into selection of topics of interest. The high accuracy achieved in our tests demonstrates the effectiveness of the proposed method, as well as the applicability of Wikipedia for semantic text categorization purposes.", "title": "" }, { "docid": "a32d6897d74397f5874cc116221af207", "text": "A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.", "title": "" }, { "docid": "f39abb67a6c392369c5618f5c33d93cf", "text": "In our research, we view human behavior as a structured sequence of context-sensitive decisions. We develop a conditional probabilistic model for predicting human decisions given the contextual situation. Our approach employs the principle of maximum entropy within the Markov Decision Process framework. Modeling human behavior is reduced to recovering a context-sensitive utility function that explains demonstrated behavior within the probabilistic model. In this work, we review the development of our probabilistic model (Ziebart et al. 
2008a) and the results of its application to modeling the context-sensitive route preferences of drivers (Ziebart et al. 2008b). We additionally expand the approach’s applicability to domains with stochastic dynamics, present preliminary experiments on modeling time-usage, and discuss remaining challenges for applying our approach to other human behavior modeling problems.", "title": "" }, { "docid": "bdfc21b5ae86711f093806b976258d33", "text": "A generic and robust approach for the detection of road vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present a novel approach to the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers (a disjunctive set of cascades). Our approach facilitates the realtime detection of both static and moving vehicles independent of orientation, colour, type and configuration. The results presented show the successful detection of differing vehicle types under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. The technique is realised on aerial imagery obtained at 1Hz from an optical camera on the medium UAV B-MAV platform with results presented to include those from the MoD Grand Challenge 2008.", "title": "" }, { "docid": "396ce5ec8ef03a55ed022c4b580531bb", "text": "BACKGROUND\nThe aim of this study was to evaluate if the presence of a bovine aortic arch (BAA)- the most common aortic arch anomaly-influences the location of the primary entry tear, the surgical procedure, and the outcome of patients undergoing operation for type A acute aortic dissection (AAD).\n\n\nMETHODS\nA total of 157 patients underwent emergency operations because of AAD (71% men, mean age 59.5 ± 13 years). Preoperative computed tomographic scans were screened for the presence of BAA. Patients were separated into 2 groups: presenting with BAA (BAA+, n = 22) or not (BAA-, n = 135). Location of the primary tear, surgical treatment, outcome, and risk factors for postoperative neurologic injury and in-hospital mortality were analyzed.\n\n\nRESULTS\nFourteen percent (22 of 157) of all patients operated on for AAD had a concomitant BAA. Location of the primary entry tear was predominantly in the aortic arch in patients with BAA (BAA+, 59.1% versus BAA-, 13.3%; p < 0.001). Multivariate analysis revealed the presence of a BAA to be an independent risk factor for having the primary tear in the aortic arch (odds ratio [OR], 14.79; 95% confidence interval [CI] 4.54-48.13; p < 0.001) but not for in-hospital mortality. Patients with BAA had a higher rate of postoperative neurologic injury (BAA+, 35% versus BAA-, 7.9%; p = 0.004). Multivariate analysis identified the presence of BAA as an independent risk factor for postoperative neurologic injury (OR, 4.9; 95% CI, 1.635-14.734; p = 0.005).\n\n\nCONCLUSIONS\nIn type A AAD, the presence of a BAA predicts the location of the primary entry site in the aortic arch and is an independent risk factor for a poor neurologic outcome.", "title": "" }, { "docid": "0580342f7efb379fc417d2e5e48c4b73", "text": "The use of System Dynamics Modeling in Supply Chain Management has only recently re-emerged after a lengthy slack period. 
Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, followed by a discussion of research issues that have evolved, and presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.", "title": "" } ]
scidocsrr
49cfc1193997985c8b7c247f67287fc6
Forecasting daily lake levels using artificial intelligence approaches
[ { "docid": "00b8207e783aed442fc56f7b350307f6", "text": "A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.", "title": "" } ]
[ { "docid": "a7b9505a029e58531f250c5728dbeef4", "text": "This paper proposes an object recognition approach intended for extracting, analyzing and clustering of features from RGB image views from given objects. Extracted features are matched with features in learned object models and clustered in Hough-space to find a consistent object pose. Hypotheses for valid poses are verified by computing a homography from detected features. Using that homography features are back projected onto the input image and the resulting area is checked for possible presence of other objects. This approach is applied by our team homer[at]UniKoblenz in the RoboCup[at]Home league. Besides the proposed framework, this work offers the computer vision community with online programs available as open source software.", "title": "" }, { "docid": "43044459a273dafa29dccdfc0cf90734", "text": "The principles and practices that guide the design and development of test items are changing because our assessment practices are changing. Educational visionary Randy Bennett (2001) anticipated that computers and the Internet would become two of the most powerful forces of change in educational measurement. Bennett’s premonition was spot-on. Internet-based computerized testing has dramatically changed educational measurement because test administration procedures combined with the growing popularity of digital media and the explosion in Internet use have created the foundation for different types of tests and test items. As a result, many educational tests that were once given in a paper format are now administered by computer using the Internet. Many common and wellknown exams in the domain of certification and licensure testing can be cited as examples, including the Graduate Management Achievement Test (GMAT), the Graduate Record Exam (GRE), the Test of English as a Foreign Language (TOEFL iBT), the American Institute of Certified Public Accountants Uniform CPA examination (CBT-e), the Medical Council of Canada Qualifying Exam Part I (MCCQE I), the National Council Licensure Examination for Registered Nurses (NCLEX-RN) and the National Council Licensure Examination for Practical Nurses (NCLEX-PN). This rapid transition to computerized testing is also occurring in K–12 education. As early as 2009, Education Week’s “Technology Counts” reported that educators in more than half of the U.S. states—where 49 of the 50 states at that time had educational achievement testing—administer some form of computerized testing. The move toward Common Core State Standards will only accelerate this transition given that the two largest consortiums, PARCC and SMARTER Balance, are using technology to develop and deliver computerized tests and to design constructed-response items and performance-based tasks that will be scored using computer algorithms. Computerized testing offers many advantages to examinees and examiners compared to more traditional paper-based tests. For instance, computers support the development of technology-enhanced item types that allow examiners to use more diverse item formats and measure a broader range of knowledge and skills. Computer algorithms can also be developed so these new item types are scored automatically and with limited human intervention, thereby eliminating the need for costly and timeconsuming marking and scoring sessions. Because items are scored immediately, examinees receive instant feedback on their strengths and weaknesses. 
Computerized tests also permit continuous and on-demand administration, thereby allowing examinees to have more choice about where and when they write their exams. But the advent of computerized testing has also raised new challenges, particularly in the area of item development. Large numbers of items are needed to support the banks necessary for computerized", "title": "" }, { "docid": "aa2b1a8d0cf511d5862f56b47d19bc6a", "text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:", "title": "" }, { "docid": "d057eece8018a905fe1642a1f40de594", "text": "Removal of noise from the original signal is still a bottleneck for researchers. There are several methods and techniques published and each method has its own advantages, disadvantages and assumptions. This paper presents a review of some significant work in the field of Image Denoising. The brief introduction of some popular approaches is provided and discussed. Insights and potential future trends are also discussed.", "title": "" }, { "docid": "1593fd6f9492adc851c709e3dd9b3c5f", "text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.", "title": "" }, { "docid": "48be442dfe31fbbbefb6fbf0833112fb", "text": "When documents and queries are presented in different languages, the common approach is to translate the query into the document language. While there are a variety of query translation approaches, recent research suggests that combining multiple methods into a single “structured query” is the most effective. In this paper, we introduce a novel approach for producing a unique combination recipe for each query, as it has also been shown that the optimal combination weights differ substantially across queries and other task specifics. Our query-specific combination method generates statistically significant improvements over other combination strategies presented in the literature, such as uniform and task-specific weighting.
An in-depth empirical analysis presents insights about the effect of data size, domain differences, labeling and tuning on the end performance of our approach.", "title": "" }, { "docid": "9f11eb476ab0ae5a353fb0279ea4697d", "text": "This paper presents a relaxation oscillator that utilizes a supply-stabilized pico-powered voltage and current reference (VCRG) to charge and reset a chopped pair of MIM capacitors at sub-nW power levels. Specifically, a temperature- and line-stabilized reference voltage is generated via a 4-transistor (4T) self-regulated structure, the output of which is used to bias a temperature-compensated gate-leakage transistor to generate a stabilized current reference. The reference current is then used to charge a swapping pair of MIM capacitors to compare to the voltage generated by the same VCRG in a relaxation topology. The design is fabricated in 65 nm CMOS, and 14 measured samples yield a reference voltage of 147.1 mV achieving a temperature coefficient of 364 ppm/°C and a line regulation of 0.21%/V, and a reference current of 10.2 pA achieving a temperature coefficient of 1077.3 ppm/°C and a line regulation of 1.79%/V (all numbers averaged across all samples). The proposed VCRG-based relaxation oscillator achieves an average temperature coefficient of 999.9 ppm/°C from −40 to 120° C and a line regulation of 1.6%/V from 0.6 to 1.1 V, all at a system power consumption of 124.2 pW at 20° C.", "title": "" }, { "docid": "d610f7d468fe2f28637f4aeb95948cd6", "text": "A computational model is described in which the sizes of variables are represented by the explicit times at which action potentials occur, rather than by the more usual 'firing rate' of neurons. The comparison of patterns over sets of analogue variables is done by a network using different delays for different information paths. This mode of computation explains how one scheme of neuroarchitecture can be used for very different sensory modalities and seemingly different computations. The oscillations and anatomy of the mammalian olfactory systems have a simple interpretation in terms of this representation, and relate to processing in the auditory system. Single-electrode recording would not detect such neural computing. Recognition 'units' in this style respond more like radial basis function units than elementary sigmoid units.", "title": "" }, { "docid": "37f55e03f4d1ff3b9311e537dc7122b5", "text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. 
We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.", "title": "" }, { "docid": "6b9a25385c44fcef85a0e1725f7ff0c2", "text": "Placement of interior node points is a crucial step in the generation of quality meshes in sweeping algorithms. Two new algorithms were devised for node point placement and implemented in Sweep Tool, the first based on the use of linear transformations between bounding node loops and the second based on smoothing. Examples are given that demonstrate the effectiveness of these algorithms.", "title": "" }, { "docid": "ca3ea61314d43abeac81546e66ff75e4", "text": "OBJECTIVE\nTo describe and discuss the process used to write a narrative review of the literature for publication in a peer-reviewed journal. Publication of narrative overviews of the literature should be standardized to increase their objectivity.\n\n\nBACKGROUND\nIn the past decade numerous changes in research methodology pertaining to reviews of the literature have occurred. These changes necessitate authors of review articles to be familiar with current standards in the publication process.\n\n\nMETHODS\nNarrative overview of the literature synthesizing the findings of literature retrieved from searches of computerized databases, hand searches, and authoritative texts.\n\n\nDISCUSSION\nAn overview of the use of three types of reviews of the literature is presented. Step by step instructions for how to conduct and write a narrative overview utilizing a 'best-evidence synthesis' approach are discussed, starting with appropriate preparatory work and ending with how to create proper illustrations. Several resources for creating reviews of the literature are presented and a narrative overview critical appraisal worksheet is included. A bibliography of other useful reading is presented in an appendix.\n\n\nCONCLUSION\nNarrative overviews can be a valuable contribution to the literature if prepared properly. New and experienced authors wishing to write a narrative overview should find this article useful in constructing such a paper and carrying out the research process. It is hoped that this article will stimulate scholarly dialog amongst colleagues about this research design and other complex literature review methods.", "title": "" }, { "docid": "4229efa8c62e28794bd2eae055eb1449", "text": "The rapid growth of e-commerce is imposing profound impacts on modern society. On the supply side, the emergence of e-commerce is greatly changing the operation behavior of some retailers and is increasing product internationalization due to its geographically unlimited nature. On the demand side, the pervasiveness of e-commerce affects how, where, and when consumers shop, and indirectly influences the way in which we live our lives. However, the development of e-commerce is still in an early stage, and why consumers choose (or do not choose) online purchasing is far from being completely understood. To better evaluate and anticipate those profound impacts of e-commerce, therefore, it is important to further refine our understanding of consumers' e-shopping behavior. 
A number of studies have investigated e-shopping behavior, and reviewing them is valuable for further improving our understanding. This report aims to summarize previous e-shopping research in a systematic way. In this review, we are interested primarily in the potential benefits and costs that the internet offers for the business-to-consumer segment of e-commerce in the transaction (purchase) channel. An overview of the 65 empirical studies analyzed in this report is provided in the Appendix. Most previous studies fall into one or more of several theoretical frameworks, including the theory of reasoned action, the theory of planned behavior, the technology acceptance model, transaction cost theory, innovation diffusion theory, and others. Among them, social psychological theories (the theory of reasoned action, the theory of planned behavior, the technology acceptance model) were widely applied. As shown in the applications of different theories, e-shopping behavior is not a simple decision process, and thus an integration of various theories is necessary to deal with its complexities. We suggest synthesizing these theories through the development of a comprehensive list of benefits and costs, using each of the key constructs of the pertinent theories as a guide to identifying the nature of those benefits and costs. The dependent variables mainly include e-shopping intention and actual e-shopping behavior (a few studies used attitudes toward e-shopping). E-shopping intention was measured by various dimensions. Among them, the directly-stated intention to purchase online is the most frequently used measure. Although some studies used a unidimensional measure, most adopted a latent construct to assess consumers' e-shopping intentions. Actual e-shopping behavior mainly includes three dimensions: adoption, spending, and frequency. Most studies examined one or more of these three dimensions directly, while a few studies constructed a latent …", "title": "" }, { "docid": "eb8bdb2a401f2a1233118e53430ac6c1", "text": "The two main research branches in intelligent vehicles field are Advanced Driver Assistance Systems (ADAS) [1] and autonomous driving [2]. ADAS generally work on predefined enviroment and limited scenarios such as highway driving, low speed driving, night driving etc. In such situations this systems have sufficiently high performance and the main features that allow their large diffusion and that have enabled commercialization in this years are the low cost, the small size and the easy integration into the vehicle. Autonomous vehicle, on the other hand, should be ready to work over all-scenarios, all-terrain and all-wheather conditions, but nowadays autonomous vehicle are used in protected and structured enviroments or military applications [3], [4]. Generally many differences between ADAS and autonomous vehicles, both hardware and software features, are related on cost and integration: ADAS are embedded into vehicles and might be low cost; on the other hand usually are not heavy limitations on cost and integration related to autonomous vehicles. Obviosly, the main difference is the presence/absence of the driver. Otherwise, most of the undelying ideas are shared, such as perception, planning, actuation needed in this kind of systems.", "title": "" }, { "docid": "6ebd75996b8a652720b23254c9d77be4", "text": "This paper focuses on a biometric cryptosystem implementation and evaluation based on a number of fingerprint texture descriptors. 
The texture descriptors, namely, the Gabor filter-based FingerCode, a local binary pattern (LBP), and a local direction pattern (LDP), and their various combinations are considered. These fingerprint texture descriptors are binarized using a biometric discretization method and used in a fuzzy commitment scheme (FCS). We constructed the biometric cryptosystems, which achieve a good performance, by fusing discretized fingerprint texture descriptors and using effective error-correcting codes. We tested the proposed system on a FVC2000 DB2a fingerprint database, and the results demonstrate that the new system significantly improves the performance of the FCS for texture-based", "title": "" }, { "docid": "cfa58ab168beb2d52fe6c2c47488e93a", "text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.", "title": "" }, { "docid": "a4957c88aee24ee9223afea8b01a8a62", "text": "This study examined smartphone user behaviors and their relation to self-reported smartphone addiction. Thirty-four users who did not own smartphones were given instrumented iPhones that logged all phone use over the course of the year-long study. At the conclusion of the study, users were asked to rate their level of addiction to the device. Sixty-two percent agreed or strongly agreed that they were addicted to their iPhones. These users showed differentiated smartphone use as compared to those users who did not indicate an addiction. Addicted users spent twice as much time on their phone and launched applications much more frequently (nearly twice as often) as compared to the non-addicted user. Mail, Messaging, Facebook and the Web drove this use. Surprisingly, Games did not show any difference between addicted and nonaddicted users. Addicted users showed significantly lower time-per-interaction than did non-addicted users for Mail, Facebook and Messaging applications. One addicted user reported that his addiction was problematic, and his use data was beyond three standard deviations from the upper hinge. The implications of the relationship between the logged and self-report data are discussed.", "title": "" }, { "docid": "c3af6eae1bd5f2901914d830280eca48", "text": "This paper proposes a novel approach for the classification of 3D shapes exploiting surface and volumetric clues inside a deep learning framework. The proposed algorithm uses three different data representations. The first is a set of depth maps obtained by rendering the 3D object. The second is a novel volumetric representation obtained by counting the number of filled voxels along each direction. Finally NURBS surfaces are fitted over the 3D object and surface curvature parameters are selected as the third representation. All the three data representations are fed to a multi-branch Convolutional Neural Network. Each branch processes a different data source and produces a feature vector by using convolutional layers of progressively reduced resolution. The extracted feature vectors are fed to a linear classifier that combines the outputs in order to get the final predictions. 
Experimental results on the ModelNet dataset show that the proposed approach is able to obtain a state-of-the-art performance.", "title": "" }, { "docid": "c43ad751dade7d0a5a396f95cc904030", "text": "The electric grid is radically evolving and transforming into the smart grid, which is characterized by improved energy efficiency and manageability of available resources. Energy management (EM) systems, often integrated with home automation systems, play an important role in the control of home energy consumption and enable increased consumer participation. These systems provide consumers with information about their energy consumption patterns and help them adopt energy-efficient behavior. The new generation EM systems leverage advanced analytics and communication technologies to offer consumers actionable information and control features, while ensuring ease of use, availability, security, and privacy. In this article, we present a survey of the state of the art in EM systems, applications, and frameworks. We define a set of requirements for EM systems and evaluate several EM systems in this context. We also discuss emerging trends in this area.", "title": "" }, { "docid": "71e8c35e0f0b5756d14821622a8d0fc5", "text": "Classic drugs of abuse lead to specific increases in cerebral functional activity and dopamine release in the shell of the nucleus accumbens (the key neural structure for reward, motivation, and addiction). In contrast, caffeine at doses reflecting daily human consumption does not induce a release of dopamine in the shell of the nucleus accumbens but leads to a release of dopamine in the prefrontal cortex, which is consistent with its reinforcing properties.", "title": "" } ]
scidocsrr
f2cf673fdb691fb7a3f142338ff21b81
Measuring Online Learning Systems Success: Applying the Updated DeLone and McLean Model
[ { "docid": "a8699e1ed8391e5a55fbd79ae3ac0972", "text": "The benefits of an e-learning system will not be maximized unless learners use the system. This study proposed and tested alternative models that seek to explain student intention to use an e-learning system when the system is used as a supplementary learning tool within a traditional class or a stand-alone distance education method. The models integrated determinants from the well-established technology acceptance model as well as system and participant characteristics cited in the research literature. Following a demonstration and use phase of the e-learning system, data were collected from 259 college students. Structural equation modeling provided better support for a model that hypothesized stronger effects of system characteristics on e-learning system use. Implications for both researchers and practitioners are discussed. 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1c0efa706f999ee0129d21acbd0ef5ab", "text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complex dependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success.", "title": "" } ]
[ { "docid": "73d09f005f9335827493c3c47d02852b", "text": "Multiprotocol Label Switched Networks need highly intelligent controls to manage high volume traffic due to issues of traffic congestion and best path selection. The work demonstrated in this paper shows results from simulations for building optimal fuzzy based algorithm for traffic splitting and congestion avoidance. The design and implementation of Fuzzy based software defined networking is illustrated by introducing the Fuzzy Traffic Monitor in an ingress node. Finally, it displays improvements in the terms of mean delay (42.0%) and mean loss rate (2.4%) for Video Traffic. Then, the resu1t shows an improvement in the terms of mean delay (5.4%) and mean loss rate (3.4%) for Data Traffic and an improvement in the terms of mean delay(44.9%) and mean loss rate(4.1%) for Voice Traffic as compared to default MPLS implementation. Keywords—Multiprotocol Label Switched Networks; Fuzzy Traffic Monitor; Network Simulator; Ingress; Traffic Splitting; Fuzzy Logic Control System; Label setup System; Traffic Splitting System", "title": "" }, { "docid": "0241cef84d46b942ee32fc7345874b90", "text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.", "title": "" }, { "docid": "f66dfbbd6d2043744d32b44dba145ef2", "text": "Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge for traditional collaborative filtering-based recommender systems. The problem becomes more challenging when people travel to a new city where they have no activity history.\n In this paper, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item co-occurrence patterns and exploiting item contents. The online recommendation part automatically combines the learnt interest of the querying user and the local preference of the querying city to produce the top-k recommendations. To speed up this online process, a scalable query processing technique is developed by extending the classic Threshold Algorithm (TA). We evaluate the performance of our recommender system on two large-scale real data sets, DoubanEvent and Foursquare. 
The results show the superiority of LCARS in recommending spatial items for users, especially when traveling to new cities, in terms of both effectiveness and efficiency.", "title": "" }, { "docid": "7c0b7d55abdd6cce85730dbf1cd02109", "text": "Suppose fx, h , ■ • ■ , fk are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h\\ , h2, •• -, A* respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(fx ,f2, • • • ,fk ; N) denote the number of positive integers n between 1 and N inclusive such that /i(n), f2(n), • ■ ■ , fk(n) are all primes. (We ignore the finitely many values of n for which some /,(n) is negative.) Then heuristically we would expect to have for N large", "title": "" }, { "docid": "f555a50f629bd9868e1be92ebdcbc154", "text": "The transformation of traditional energy networks to smart grids revolutionizes the energy industry in terms of reliability, performance, and manageability by providing bi-directional communications to operate, monitor, and control power flow and measurements. However, communication networks in smart grid bring increased connectivity with increased severe security vulnerabilities and challenges. Smart grid can be a prime target for cyber terrorism because of its critical nature. As a result, smart grid security is already getting a lot of attention from governments, energy industries, and consumers. There have been several research efforts for securing smart grid systems in academia, government and industries. This article provides a comprehensive study of challenges in smart grid security, which we concentrate on the problems and proposed solutions. Then, we outline current state of the research and future perspectives.With this article, readers can have a more thorough understanding of smart grid security and the research trends in this topic.", "title": "" }, { "docid": "ca51d7c9c4a764dbb2f8f01adf3f3b5a", "text": "Detecting carried objects is one of the requirements for developing systems to reason about activities involving people and objects. We present an approach to detect carried objects from a single video frame with a novel method that incorporates features from multiple scales. Initially, a foreground mask in a video frame is segmented into multi-scale superpixels. Then the human-like regions in the segmented area are identified by matching a set of extracted features from superpixels against learned features in a codebook. A carried object probability map is generated using the complement of the matching probabilities of superpixels to human-like regions and background information. A group of superpixels with high carried object probability and strong edge support is then merged to obtain the shape of the carried object. We applied our method to two challenging datasets, and results show that our method is competitive with or better than the state-of-the-art.", "title": "" }, { "docid": "b8de76afab03ad223fb4713b214e3fec", "text": "Companies facing new requirements for governance are scrambling to buttress financial-reporting systems, overhaul board structures--whatever it takes to comply. But there are limits to how much good governance can be imposed from the outside. Boards know what they ought to be: seats of challenge and inquiry that add value without meddling and make CEOs more effective but not all-powerful. 
A board can reach that goal only if it functions as a high-performance team, one that is competent, coordinated, collegial, and focused on an unambiguous goal. Such entities don't just evolve; they must be constructed to an exacting blueprint--what the author calls board building. In this article, Nadler offers an agenda and a set of tools that boards can use to define and achieve their objectives. It's important for a board to conduct regular self-assessments and to pay attention to the results of those analyses. As a first step, the directors and the CEO should agree on which of the following common board models best fits the company: passive, certifying, engaged, intervening, or operating. The directors and the CEO should then analyze which business tasks are most important and allot sufficient time and resources to them. Next, the board should take inventory of each director's strengths to ensure that the group as a whole possesses the skills necessary to do its work. Directors must exert more influence over meeting agendas and make sure they have the right information at the right time and in the right format to perform their duties. Finally, the board needs to foster an engaged culture characterized by candor and a willingness to challenge. An ambitious board-building process, devised and endorsed both by directors and by management, can potentially turn a good board into a great one.", "title": "" }, { "docid": "dde00778c4d9a3123317840eb001df54", "text": "The ability to generate heat under an alternating magnetic field (AMF) makes magnetic iron oxide nanoparticles (MIONs) an ideal heat source for biomedical applications including cancer thermoablative therapy, tissue preservation, and remote control of cell function. However, there is a lack of quantitative understanding of the mechanisms governing heat generation of MIONs, and the optimal nanoparticle size for magnetic fluid heating (MFH) applications. Here, we show that MIONs with large sizes (>20 nm) have a specific absorption rate (SAR) significantly higher than that predicted by the widely used linear theory of MFH. The heating efficiency of MIONs in both the superparamagnetic and ferromagnetic regimes increased with size, which can be accurately characterized with a modified dynamic hysteresis model. In particular, the 40 nm ferromagnetic nanoparticles have an SAR value approaching the theoretical limit under a clinically relevant AMF. An in vivo study further demonstrated that the 40 nm MIONs could effectively heat tumor tissues at a minimal dose. Our experimental results and theoretical analysis on nanoparticle heating offer important insight into the rationale design of MION-based MFH for therapeutic applications.", "title": "" }, { "docid": "b08027d8febf1d7f8393b9934739847d", "text": "Sarcasm is generally characterized as a figure of speech that involves the substitution of a literal by a figurative meaning, which is usually the opposite of the original literal meaning. We re-frame the sarcasm detection task as a type of word sense disambiguation problem, where the sense of a word is either literal or sarcastic. We call this the Literal/Sarcastic Sense Disambiguation (LSSD) task. We address two issues: 1) how to collect a set of target words that can have either literal or sarcastic meanings depending on context; and 2) given an utterance and a target word, how to automatically detect whether the target word is used in the literal or the sarcastic sense. 
For the latter, we investigate several distributional semantics methods and show that a Support Vector Machines (SVM) classifier with a modified kernel using word embeddings achieves a 7-10% F1 improvement over a strong lexical baseline.", "title": "" }, { "docid": "92e150f30ae9ef371ffdd7160c84719d", "text": "Categorization is a vitally important skill that people use every day. Early theories of category learning assumed a single learning system, but recent evidence suggests that human category learning may depend on many of the major memory systems that have been hypothesized by memory researchers. As different memory systems flourish under different conditions, an understanding of how categorization uses available memory systems will improve our understanding of a basic human skill, lead to better insights into the cognitive changes that result from a variety of neurological disorders, and suggest improvements in training procedures for complex categorization tasks.", "title": "" }, { "docid": "289694f2395a6a2afc7d86d475b9c02d", "text": "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.", "title": "" }, { "docid": "b32218abeff9a34c3e89eac76b8c6a45", "text": "The reliability and availability of distributed services can be ensured using replication. We present an architecture and an algorithm for Byzantine fault-tolerant state machine replication. We explore the benefits of virtualization to reliably detect and tolerate faulty replicas, allowing the transformation of Byzantine faults into omission faults. Our approach reduces the total number of physical replicas from 3f+1 to 2f+1. It is based on the concept of twin virtual machines, which involves having two virtual machines in each physical host, each one acting as failure detector of the other.", "title": "" }, { "docid": "a3f06bfcc2034483cac3ee200803878c", "text": "This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. 
It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full details (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.", "title": "" }, { "docid": "8d9a55b7d730d9acbff50aef4f55808b", "text": "Interactions between light and matter can be dramatically modified by concentrating light into a small volume for a long period of time. Gaining control over such interaction is critical for realizing many schemes for classical and quantum information processing, including optical and quantum computing, quantum cryptography, and metrology and sensing. Plasmonic structures are capable of confining light to nanometer scales far below the diffraction limit, thereby providing a promising route for strong coupling between light and matter, as well as miniaturization of photonic circuits. At the same time, however, the performance of plasmonic circuits is limited by losses and poor collection efficiency, presenting unique challenges that need to be overcome for quantum plasmonic circuits to become a reality. In this paper, we survey recent progress in controlling emission from quantum emitters using plasmonic structures, as well as efforts to engineer surface plasmon propagation and design plasmonic circuits using these elements.", "title": "" }, { "docid": "670b35833f96a62bce9e2ddd58081fc4", "text": "Although video summarization has achieved great success in recent years, few approaches have realized the influence of video structure on the summarization results. As we know, the video data follow a hierarchical structure, i.e., a video is composed of shots, and a shot is composed of several frames. Generally, shots provide the activity-level information for people to understand the video content. While few existing summarization approaches pay attention to the shot segmentation procedure. They generate shots by some trivial strategies, such as fixed length segmentation, which may destroy the underlying hierarchical structure of video data and further reduce the quality of generated summaries. To address this problem, we propose a structure-adaptive video summarization approach that integrates shot segmentation and video summarization into a Hierarchical Structure-Adaptive RNN, denoted as HSA-RNN. We evaluate the proposed approach on four popular datasets, i.e., SumMe, TVsum, CoSum and VTW. The experimental results have demonstrated the effectiveness of HSA-RNN in the video summarization task.", "title": "" }, { "docid": "c70383b0a3adb6e697932ef4b02877ac", "text": "Betweenness centrality (BC) is a crucial graph problem that measures the significance of a vertex by the number of shortest paths leading through it. 
We propose Maximal Frontier Betweenness Centrality (MFBC): a succinct BC algorithm based on novel sparse matrix multiplication routines that performs a factor of p^(1/3) less communication on p processors than the best known alternatives, for graphs with n vertices and average degree k = n/p^(2/3). We formulate, implement, and prove the correctness of MFBC for weighted graphs by leveraging monoids instead of semirings, which enables a surprisingly succinct formulation. MFBC scales well for both extremely sparse and relatively dense graphs. It automatically searches a space of distributed data decompositions and sparse matrix multiplication algorithms for the most advantageous configuration. The MFBC implementation outperforms the well-known CombBLAS library by up to 8x and shows more robust performance. Our design methodology is readily extensible to other graph problems.", "title": "" }, { "docid": "1bb7c5d71db582329ad8e721fdddb0b3", "text": "The sharing economy is spreading rapidly worldwide in a number of industries and markets. The disruptive nature of this phenomenon has drawn mixed responses ranging from active conflict to adoption and assimilation. Yet, in spite of the growing attention to the sharing economy, we still do not know much about it. With the abundant enthusiasm about the benefits that the sharing economy can unleash and the weekly reminders about its dark side, further examination is required to determine the potential of the sharing economy while mitigating its undesirable side effects. The panel will join the ongoing debate about the sharing economy and contribute to the discourse with insights about how digital technologies are critical in shaping this turbulent ecosystem. Furthermore, we will define an agenda for future research on the sharing economy as it becomes part of the mainstream society as well as part of the IS research", "title": "" }, { "docid": "401ae8d7243fa09d3dd358237f0c64f9", "text": "We introduce a novel information-theoretic approach for active model selection and demonstrate its effectiveness in a real-world application. Although our method can work with arbitrary models, we focus on actively learning the appropriate structure for Gaussian process (GP) models with arbitrary observation likelihoods. We then apply this framework to rapid screening for noise-induced hearing loss (NIHL), a widespread and preventible disability, if diagnosed early. We construct a GP model for pure-tone audiometric responses of patients with NIHL. Using this and a previously published model for healthy responses, the proposed method is shown to be capable of diagnosing the presence or absence of NIHL with drastically fewer samples than existing approaches. Further, the method is extremely fast and enables the diagnosis to be performed in real time.", "title": "" }, { "docid": "32317e5403d75ccc5f2725991f281874", "text": "Background: Knowing the cultural factors existing behind health behaviors is important for improving the acceptance of services and for elevating the quality of service. Objectives: This study was conducted for the purpose of evaluating the effect of cultural characteristics on use of health care services using the “Giger and Davidhizar’s Transcultural Assessment Model”. Methods: The research is qualitative. The study group was 31 individuals who volunteered to participate in the study and were living in a rural area. The snowball method was used. Data were collected in 2005. 

Results: Limitations/obstacles to the use of health care services were widespread gender-related barriers, use of traditional treatment methods, a high level of environmental control, and a fatalistic attitude about health. Conclusion: According to the results, the most important limitation/obstacle to using health care services was being a woman.", "title": "" } ]
scidocsrr
338ad5a4a519f7de660d48c04256616a
Data-Driven Methods for Solving Algebra Word Problems
[ { "docid": "26f393df2f3e7c16db2ee10d189efb37", "text": "Recently a few systems for automatically solving math word problems have reported promising results. However, the datasets used for evaluation have limitations in both scale and diversity. In this paper, we build a large-scale dataset which is more than 9 times the size of previous ones, and contains many more problem types. Problems in the dataset are semiautomatically obtained from community question-answering (CQA) web pages. A ranking SVM model is trained to automatically extract problem answers from the answer text provided by CQA users, which significantly reduces human annotation cost. Experiments conducted on the new dataset lead to interesting and surprising results.", "title": "" }, { "docid": "12357019e2805e88b2bd47bfb331ffd7", "text": "This paper presents a deep neural solver to automatically solve math word problems. In contrast to previous statistical learning approaches, we directly translate math word problems to equation templates using a recurrent neural network (RNN) model, without sophisticated feature engineering. We further design a hybrid model that combines the RNN model and a similarity-based retrieval model to achieve additional performance improvement. Experiments conducted on a large dataset show that the RNN model and the hybrid model significantly outperform stateof-the-art statistical learning methods for math word problem solving.", "title": "" }, { "docid": "5a46d347e83aec7624dde84ecdd5302c", "text": "This paper presents a new algorithm to automatically solve algebra word problems. Our algorithm solves a word problem via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the word problem into a set of equation system templates extracted from the training data. To obtain a robust decision surface, we train a log-linear model to make the margin between the correct assignments and the false ones as large as possible. This results in a quadratic programming (QP) problem which can be efficiently solved. Experimental results show that our algorithm achieves 79.7% accuracy, about 10% higher than the state-of-the-art baseline (Kushman et al., 2014).", "title": "" } ]
[ { "docid": "b20ec220d2b027a54573b4d1338670f2", "text": "With the rapid development of economic globalization, the characteristics of supply chain such as large amount of participants, scattered geographical distribution, and long time span require the participants in supply chain to trust each other for efficient information exchange. Targeting at the pain points such as low trust degree and untimely information exchange in traditional supply chain information system, combining with the core advantages of blockchain technology, this paper proposes the concept of reorganization of supply chain information system based on blockchain technology.\n In this paper, we first review the key problems in supply chain management, analyze the key factors that weaken the resilience of the supply chain, derive the root causes of supply chain information asymmetry and the raise of supply chain risks as a whole caused by imperfections of the trust mechanism. Aimed at the above problems, the concept of reconfiguring the supply chain information system by using blockchain technology is proposed and verified by examples. Finally, by means of the conceptual model of the information platform based on the blockchain technology conceived in this paper, the specific tactics to be implemented and future challenges are clarified for the improvement of supply chain resilience.", "title": "" }, { "docid": "dbc7e759ce30307475194adb4ca37f1f", "text": "Pharyngeal arches appear in the 4th and 5th weeks of development of the human embryo. The 1st pharyngeal arch develops into the incus and malleus, premaxilla, maxilla, zygomatic bone; part of the temporal bone, the mandible and it contributes to the formation of bones of the middle ear. The musculature of the 1st pharyngeal arch includes muscles of mastication, anterior belly of the digastric mylohyoid, tensor tympani and tensor palatini. The second pharyngeal arch gives rise to the stapes, styloid process of the temporal bone, stylohyoid ligament, the lesser horn and upper part of the body of the hyoid bone. The stapedius muscle, stylohyoid, posterior belly of the digastric, auricular and muscles of facial expressional all derive from the 2nd pharyngeal arch. Otocephaly has been classified as a defect of blastogenesis, with structural defects primarily involving the first and second branchial arch derivatives. It may also result in dysmorphogenesis of other midline craniofacial field structures, such as the forebrain and axial body structures.", "title": "" }, { "docid": "af75e646afb0cf67130496397534eddc", "text": "Prior laboratory studies have shown that PhishGuru, an embedded training system, is an effective way to teach users to identify phishing scams. PhishGuru users are sent simulated phishing attacks and trained after they fall for the attacks. In this current study, we extend the PhishGuru methodology to train users about spear phishing and test it in a real world setting with employees of a Portuguese company. Our results demonstrate that the findings of PhishGuru laboratory studies do indeed hold up in a real world deployment. Specifically, the results from the field study showed that a large percentage of people who clicked on links in simulated emails proceeded to give some form of personal information to fake phishing websites, and that participants who received PhishGuru training were significantly less likely to fall for subsequent simulated phishing attacks one week later. This paper also presents some additional new findings. 
First, people trained with spear phishing training material did not make better decisions in identifying spear phishing emails compared to people trained with generic training material. Second, we observed that PhishGuru training could be effective in training other people in the organization who did not receive training messages directly from the system. Third, we also observed that employees in technical jobs were not different from employees with non-technical jobs in identifying phishing emails before and after the training. We conclude with some lessons that we learned in conducting the real world study.", "title": "" }, { "docid": "6cdb73baa43c26ce0184fdfb270b124f", "text": "Most video surveillance suspect investigation systems rely on the videos taken in different camera views. Actually, besides the videos, in the investigation process, investigators also manually label some marks, which, albeit incomplete, can be quite accurate and helpful in identifying persons. This paper studies the problem of Person Re-identification with Incomplete Marks (PRIM), aiming at ranking the persons in the gallery according to both the videos and incomplete marks. This problem is solved by a multi-step fusion algorithm, which consists of three key steps: (i) The early fusing step exploits both visual features and marked attributes to predict a complete and precise attribute vector. (ii) Based on the statistical attribute dominance and saliency phenomena, a dominance-saliency matching model is suggested for measuring the distance between attribute vectors. (iii) The gallery is ranked separately by using visual features and attribute vectors, and the overall ranking list is the result of a late fusion. Experiments conducted on VIPeR dataset have validated the effectiveness of the proposed method in all the three key steps. The results also show that through introducing marks, the retrieval accuracy is significantly improved.", "title": "" }, { "docid": "1306ec9eaa39a8c12acf08567ed733b2", "text": "Energy restriction induces physiological effects that hinder further weight loss. Thus, deliberate periods of energy balance during weight loss interventions may attenuate these adaptive responses to energy restriction and thereby increase the efficiency of weight loss (i.e. the amount of weight or fat lost per unit of energy deficit). To address this possibility, we systematically searched MEDLINE, PreMEDLINE, PubMed and Cinahl and reviewed adaptive responses to energy restriction in 40 publications involving humans of any age or body mass index that had undergone a diet involving intermittent energy restriction, 12 with direct comparison to continuous energy restriction. Included publications needed to measure one or more of body weight, body mass index, or body composition before and at the end of energy restriction. 31 of the 40 publications involved 'intermittent fasting' of 1-7-day periods of severe energy restriction. While intermittent fasting appears to produce similar effects to continuous energy restriction to reduce body weight, fat mass, fat-free mass and improve glucose homeostasis, and may reduce appetite, it does not appear to attenuate other adaptive responses to energy restriction or improve weight loss efficiency, albeit most of the reviewed publications were not powered to assess these outcomes. 

Intermittent fasting thus represents a valid--albeit apparently not superior--option to continuous energy restriction for weight loss.", "title": "" }, { "docid": "fdaf5546d430226721aa1840f92ba5af", "text": "The recent development of regulatory policies that permit the use of TV bands spectrum on a secondary basis has motivated discussion about coexistence of primary (e.g. TV broadcasts) and secondary users (e.g. WiFi users in TV spectrum). However, much less attention has been given to coexistence of different secondary wireless technologies in the TV white spaces. Lack of coordination between secondary networks may create severe interference situations, resulting in less efficient usage of the spectrum. In this paper, we consider two of the most prominent wireless technologies available today, namely Long Term Evolution (LTE), and WiFi, and address some problems that arise from their coexistence in the same band. We perform exhaustive system simulations and observe that WiFi is hampered much more significantly than LTE in coexistence scenarios. A simple coexistence scheme that reuses the concept of almost blank subframes in LTE is proposed, and it is observed that it can improve the WiFi throughput per user up to 50 times in the studied scenarios.", "title": "" }, { "docid": "4702fceea318c326856cc2a7ae553e1f", "text": "The Institute of Medicine identified “timeliness” as one of six key “aims for improvement” in its most recent report on quality. Yet patient delays remain prevalent, resulting in dissatisfaction, adverse clinical consequences, and often, higher costs. This tutorial describes several areas in which patients routinely experience significant and potentially dangerous delays and presents operations research (OR) models that have been developed to help reduce these delays, often at little or no cost. I also describe the difficulties in developing and implementing models as well as the factors that increase the likelihood of success. Finally, I discuss the opportunities, large and small, for using OR methodologies to significantly impact practices and policies that will affect timely access to healthcare.", "title": "" }, { "docid": "28fcee5c28c2b3aae6f4761afb00ebc2", "text": "The presence of sarcasm in text can hamper the performance of sentiment analysis. The challenge is to detect the existence of sarcasm in texts. This challenge is compounded when bilingual texts are considered, for example using Malay social media data. In this paper a feature extraction process is proposed to detect sarcasm using bilingual texts; more specifically public comments on economic related posts on Facebook. Four categories of feature that can be extracted using natural language processing are considered; lexical, pragmatic, prosodic and syntactic. We also investigated the use of idiosyncratic feature to capture the peculiar and odd comments found in a text. To determine the effectiveness of the proposed process, a non-linear Support Vector Machine was used to classify texts, in terms of the identified features, according to whether they included sarcastic content or not. The results obtained demonstrate that a combination of syntactic, pragmatic and prosodic features produced the best performance with an F-measure score of 0.852.", "title": "" }, { "docid": "58e936a28863ab21c57c1b9e0dd63751", "text": "Although attention is distributed across time as well as space, the temporal allocation of attention has been less well researched than its spatial counterpart. 
A temporal analog of the covert spatial orientation task [Posner MI, Snyder CRR, Davidson BJ (1980) Attention and the detection of signals. J Exp Psychol Gen 109:160-174] was developed to compare the neural systems involved in directing attention to spatial locations versus time intervals. We asked whether there exists a general system for allocating attentional resources, independent of stimulus dimension, or whether functionally specialized brain regions are recruited for directing attention toward spatial versus temporal aspects of the environment. We measured brain activity in seven healthy volunteers by using positron emission tomography (PET) and in eight healthy volunteers by using functional magnetic resonance imaging (fMRI). The task manipulated cued attention to spatial locations (S) and temporal intervals (T) in a factorial design. Symbolic central cues oriented subjects toward S only (left or right), toward T only (300 msec or 1500 msec), toward both S and T simultaneously, or provided no information regarding S or T. Subjects also were scanned during a resting baseline condition. Behavioral data showed benefits and costs for performance during temporal attention similar to those established for spatial attention. Brain-imaging data revealed a partial overlap between neural systems involved in the performance of spatial versus temporal orientation of attention tasks. Additionally, hemispheric asymmetries revealed preferential right and left parietal activation for spatial and temporal attention, respectively. Parietal cortex was activated bilaterally by attending to both dimensions simultaneously. This is the first direct comparison of the neural correlates of attending to spatial versus temporal cues.", "title": "" }, { "docid": "efcd3c0ddf0b254f07bc40d4d2f71dfd", "text": "In this paper, we first provide a comprehensive investigation of four online job recommender systems (JRSs) from four different aspects: user profiling, recommendation strategies, recommendation output, and user feedback. In particular, we summarize the pros and cons of these online JRSs and highlight their differences. We then discuss the challenges in building high-quality JRSs. One main challenge lies in the design of recommendation strategies since different job applicants may have different characteristics. To address the aforementioned challenge, we develop an online JRS, iHR, which groups users into different clusters and employs different recommendation approaches for different user clusters. As a result, iHR has the capability of choosing the appropriate recommendation approaches according to users’ characteristics. Empirical results demonstrate the effectiveness of the proposed system.", "title": "" }, { "docid": "e7fc6335fc08f3c35dec43b48c4f70ca", "text": "Consumer concern about the originality of rice varieties and the quality of rice has led to originality certification of rice by existing institutions. Technology helps humans to perform evaluations of food grains using images of objects. This study developed a system as a tool to identify rice varieties. The identification process was performed by analyzing rice images using image processing. The analyzed features for identification consist of six color features, four morphological features, and two texture features. The classifier used the LVQ neural network algorithm. 

Identification results using a combination of all features gave an average accuracy of 70.3%, with the highest classification accuracy of 96.6% for Mentik Wangi and the lowest classification accuracy of 30% for Cilosari.", "title": "" }, { "docid": "ce2b354fee0d2d895d8af2c6642919fa", "text": "This paper presents a new hybrid dimensionality reduction method to seek projection through optimization of both structural risk (supervised criterion) and data independence (unsupervised criterion). Classification accuracy is used as a metric to evaluate the performance of the method. By minimizing the structural risk, projection originating from the decision boundaries directly improves the classification performance from a supervised perspective. From an unsupervised perspective, projection can also be obtained based on maximum independence among features (or attributes) in data to indirectly achieve better classification accuracy over more intrinsic representation of the data. Orthogonality interrelates the two sets of projections such that minimum redundancy exists between the projections, leading to more effective dimensionality reduction. Experimental results show that the proposed hybrid dimensionality reduction method that satisfies both criteria simultaneously provides higher classification performance, especially for noisy data sets, in relatively lower dimensional space than various existing methods.", "title": "" }, { "docid": "27bf126da661051da506926f7d9de632", "text": "In this paper, we propose a novel implementation of a simultaneous localization and mapping (SLAM) system based on a monocular camera from an unmanned aerial vehicle (UAV) using depth prediction performed with Capsule Networks (CapsNet), which offer improvements over the drawbacks of the more widely-used Convolutional Neural Networks (CNN). An Extended Kalman Filter will assist in estimating the position of the UAV so that we are able to update the belief for the environment. Results will be evaluated on a benchmark dataset to portray the accuracy of our intended approach.", "title": "" }, { "docid": "e37276916d5f8682b104448489efbfc6", "text": "With the spurt in usage of smart devices, large amounts of unstructured text are generated by numerous social media tools. This text is often filled with stylistic or linguistic variations, making text analytics using traditional machine learning tools less effective. One of the specific problems in the Indian context is dealing with the large number of languages used by social media users in their roman form. As part of the FIRE-2015 shared task on mixed script information retrieval, we address the problem of word level language identification. Our approach consists of a two stage algorithm for language identification. First level classification is done using sentence level character n-grams and the second level consists of a word level character n-grams based classifier. This approach effectively captures the linguistic mode of the author in a social texting environment. The overall weighted F-Score for the run submitted to the FIRE Shared task is 0.7692. The sentence level classification algorithm which is used in achieving this result has an accuracy of 0.6887. We could further improve the accuracy of the sentence level classifier by 1.6% using additional social media text crawled from other sources. The Naive Bayes classifier showed the largest improvement (5.5%) in accuracy level by the addition of supplementary tuples. 

We also observed that using semi-supervised learning algorithm such as Expectation Maximization with Naive Bayes, the accuracy could be improved to 0.7977.", "title": "" }, { "docid": "85bc241c03d417099aa155766e6a1421", "text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.", "title": "" }, { "docid": "8f957dab2aa6b186b61bc309f3f2b5c3", "text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.", "title": "" }, { "docid": "9a27c676b5d356d5feb91850e975a336", "text": "Joseph Goldstein has written in this journal that creation (through invention) and revelation (through discovery) are two different routes to advancement in the biomedical sciences1. In my work as a phytochemist, particularly during the period from the late 1960s to the 1980s, I have been fortunate enough to travel both routes. I graduated from the Beijing Medical University School of Pharmacy in 1955. Since then, I have been involved in research on Chinese herbal medicine in the China Academy of Chinese Medical Sciences (previously known as the Academy of Traditional Chinese Medicine). From 1959 to 1962, I was released from work to participate in a training course in Chinese medicine that was especially designed for professionals with backgrounds in Western medicine. The 2.5-year training guided me to the wonderful treasure to be found in Chinese medicine and toward understanding the beauty in the philosophical thinking that underlies a holistic view of human beings and the universe.", "title": "" }, { "docid": "835e07466611989c2a4979a5e087156f", "text": "AIM\nInternet gaming disorder (IGD) is a serious disorder leading to and maintaining pertinent personal and social impairment. IGD has to be considered in view of heterogeneous and incomplete concepts. 
We therefore reviewed the scientific literature on IGD to provide an overview focusing on definitions, symptoms, prevalence, and aetiology.\n\n\nMETHOD\nWe systematically reviewed the databases ERIC, PsyARTICLES, PsycINFO, PSYNDEX, and PubMed for the period January 1991 to August 2016, and additionally identified secondary references.\n\n\nRESULTS\nThe proposed definition in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition provides a good starting point for diagnosing IGD but entails some disadvantages. Developing IGD requires several interacting internal factors such as deficient self, mood and reward regulation, problems of decision-making, and external factors such as deficient family background and social skills. In addition, specific game-related factors may promote IGD. Summarizing aetiological knowledge, we suggest an integrated model of IGD elucidating the interplay of internal and external factors.\n\n\nINTERPRETATION\nSo far, the concept of IGD and the pathways leading to it are not entirely clear. In particular, long-term follow-up studies are missing. IGD should be understood as an endangering disorder with a complex psychosocial background.\n\n\nWHAT THIS PAPER ADDS\nIn representative samples of children and adolescents, on average, 2% are affected by Internet gaming disorder (IGD). The mean prevalences (overall, clinical samples included) reach 5.5%. Definitions are heterogeneous and the relationship with substance-related addictions is inconsistent. Many aetiological factors are related to the development and maintenance of IGD. This review presents an integrated model of IGD, delineating the interplay of these factors.", "title": "" }, { "docid": "bfd879313c1dbb641798f1c8b56248d2", "text": "In this paper, we propose an attention-aware deep reinforcement learning (ADRL) method for video face recognition, which aims to discard the misleading and confounding frames and find the focuses of attentions in face videos for person recognition. We formulate the process of finding the attentions of videos as a Markov decision process and train the attention model through a deep reinforcement learning framework without using extra labels. Unlike existing attention models, our method takes information from both the image space and the feature space as the input to make better use of face information that is discarded in the feature learning process. Besides, our approach is attention-aware, which seeks different attentions of videos for the recognition of different pairs of videos. Our approach achieves very competitive video face recognition performance on three widely used video face datasets.", "title": "" }, { "docid": "f1d4323cbabd294723a2fd68321ad640", "text": "Mycosis fungoides (MF), a low-grade lymphoproliferative disorder, is the most common type of cutaneous T-cell lymphoma. Typically, neoplastic T cells localize to the skin and produce patches, plaques, tumours or erythroderma. Diagnosis of MF can be difficult due to highly variable presentations and the sometimes nonspecific nature of histological findings. Molecular biology has improved the diagnostic accuracy. Nevertheless, clinical experience is of substantial importance as MF can resemble a wide variety of skin diseases. We performed a literature review and found that MF can mimic >50 different clinical entities. We present a structured framework of clinical variations of classical, unusual and distinct forms of MF. 
Distinct subforms such as ichthyotic MF, adnexotropic (including syringotropic and folliculotropic) MF, MF with follicular mucinosis, granulomatous MF with granulomatous slack skin and papuloerythroderma of Ofuji are delineated in more detail.", "title": "" } ]
scidocsrr
84790d91d8203ad05ae357fd02c89496
DETECTING LASER SPOT IN SHOOTING SIMULATOR USING AN EMBEDDED CAMERA
[ { "docid": "16880162165f4c95d6b01dc4cfc40543", "text": "In this paper we present CMUcam3, a low-cost, open source, em bedded computer vision platform. The CMUcam3 is the third generation o f the CMUcam system and is designed to provide a flexible and easy to use ope n source development environment along with a more powerful hardware platfo rm. The goal of the system is to provide simple vision capabilities to small emb dded systems in the form of an intelligent sensor that is supported by an open sou rce community. The hardware platform consists of a color CMOS camera, a frame bu ff r, a low cost 32bit ARM7TDMI microcontroller, and an MMC memory card slot. T he CMUcam3 also includes 4 servo ports, enabling one to create entire, w orking robots using the CMUcam3 board as the only requisite robot processor. Cus tom C code can be developed using an optimized GNU toolchain and executabl es can be flashed onto the board using a serial port without external download ing hardware. The development platform includes a virtual camera target allowi ng for rapid application development exclusively on a PC. The software environment c omes with numerous open source example applications and libraries includi ng JPEG compression, frame differencing, color tracking, convolutions, histog ramming, edge detection, servo control, connected component analysis, FAT file syste m upport, and a face detector.", "title": "" }, { "docid": "4acbb4e7de6daec331c8ff8672fa7447", "text": "This paper describes a machine vision system with back lighting illumination and friendly man-machine interface. Subtraction is used to segment target holes quickly and accurately. The oval obtained after tracing boundary is processed by Generalized Hough Transform to acquire the target's center. Marked-hole's area, perimeter and moment invariants are extracted as cluster features. The auto-scoring software, programmed by Visual C++, has successfully solved the recognition of off-target and overlapped holes through alarming surveillance and bullet tacking programs. The experimental results show that, when the target is distorted obviously, the system can recognize the overlapped holes on real time and also clusters random shape holes on the target correctly. The high accuracy, fast computing speed, easy debugging and low cost make the system can be widely used.", "title": "" } ]
[ { "docid": "77a7e6233e41ce9fc8d1db2e85ee0563", "text": "We show how an ensemble of Q-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well established algorithms from the bandit setting, and adapt them to the Q-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.", "title": "" }, { "docid": "c6a519ce49dc7b5776afe8035f79fc73", "text": "For 100 years, there has been no change in the basic structure of the electrical power grid. Experiences have shown that the hierarchical, centrally controlled grid of the 20th Century is ill-suited to the needs of the 21st Century. To address the challenges of the existing power grid, the new concept of smart grid has emerged. The smart grid can be considered as a modern electric power grid infrastructure for enhanced efficiency and reliability through automated control, high-power converters, modern communications infrastructure, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy and network availability, and so on. While current power systems are based on a solid information and communication infrastructure, the new smart grid needs a different and much more complex one, as its dimension is much larger. This paper addresses critical issues on smart grid technologies primarily in terms of information and communication technology (ICT) issues and opportunities. The main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications as well as to discuss the still-open research issues in this field. It is expected that this paper will provide a better understanding of the technologies, potential advantages and research challenges of the smart grid and provoke interest among the research community to further explore this promising research area.", "title": "" }, { "docid": "2f7990443281ed98189abb65a23b0838", "text": "In recent years, there has been a tendency to correlate the origin of modern culture and language with that of anatomically modern humans. Here we discuss this correlation in the light of results provided by our first hand analysis of ancient and recently discovered relevant archaeological and paleontological material from Africa and Europe. We focus in particular on the evolutionary significance of lithic and bone technology, the emergence of symbolism, Neandertal behavioral patterns, the identification of early mortuary practices, the anatomical evidence for the acquisition of language, the", "title": "" }, { "docid": "db34e0317dc78ac7cfedb66619f9d300", "text": "Most research efforts on image classification so far have been focused on medium-scale datasets, which are often defined as datasets that can fit into the memory of a desktop (typically 4G∼48G). There are two main reasons for the limited effort on large-scale image classification. First, until the emergence of ImageNet dataset, there was almost no publicly available large-scale benchmark data for image classification. This is mostly because class labels are expensive to obtain. Second, large-scale classification is hard because it poses more challenges than its medium-scale counterparts. A key challenge is how to achieve efficiency in both feature extraction and classifier training without compromising performance. This paper is to show how we address this challenge using ImageNet dataset as an example. 
For feature extraction, we develop a Hadoop scheme that performs feature extraction in parallel using hundreds of mappers. This allows us to extract fairly sophisticated features (with dimensions being hundreds of thousands) on 1.2 million images within one day. For SVM training, we develop a parallel averaging stochastic gradient descent (ASGD) algorithm for training one-against-all 1000-class SVM classifiers. The ASGD algorithm is capable of dealing with terabytes of training data and converges very fast–typically 5 epochs are sufficient. As a result, we achieve state-of-the-art performance on the ImageNet 1000-class classification, i.e., 52.9% in classification accuracy and 71.8% in top 5 hit rate.", "title": "" }, { "docid": "e672d12d5e0163fae74639ca0384a131", "text": "The greater sophistication and complexity of machines increases the necessity to equip them with human friendly interfaces. As we know, voice is the main support for human-human communication, so it is desirable to interact with machines, namely robots, using voice. In this paper we present the recent evolution of the Natural Language Understanding capabilities of Carl, our mobile intelligent robot capable of interacting with humans using spoken natural language. The new design is based on a hybrid approach, combining a robust parser with Memory Based Learning. This hybrid architecture is capable of performing deep analysis if the sentence is (almost) completely accepted by the grammar, and capable of performing a shallow analysis if the sentence has severe errors.", "title": "" }, { "docid": "1acea5d872937a8929a174916f53303d", "text": "The pattern of muscle glycogen synthesis following glycogen-depleting exercise occurs in two phases. Initially, there is a period of rapid synthesis of muscle glycogen that does not require the presence of insulin and lasts about 30-60 minutes. This rapid phase of muscle glycogen synthesis is characterised by an exercise-induced translocation of glucose transporter carrier protein-4 to the cell surface, leading to an increased permeability of the muscle membrane to glucose. Following this rapid phase of glycogen synthesis, muscle glycogen synthesis occurs at a much slower rate and this phase can last for several hours. Both muscle contraction and insulin have been shown to increase the activity of glycogen synthase, the rate-limiting enzyme in glycogen synthesis. Furthermore, it has been shown that muscle glycogen concentration is a potent regulator of glycogen synthase. Low muscle glycogen concentrations following exercise are associated with an increased rate of glucose transport and an increased capacity to convert glucose into glycogen. The highest muscle glycogen synthesis rates have been reported when large amounts of carbohydrate (1.0-1.85 g/kg/h) are consumed immediately post-exercise and at 15-60 minute intervals thereafter, for up to 5 hours post-exercise. When carbohydrate ingestion is delayed by several hours, this may lead to ~50% lower rates of muscle glycogen synthesis. The addition of certain amino acids and/or proteins to a carbohydrate supplement can increase muscle glycogen synthesis rates, most probably because of an enhanced insulin response. However, when carbohydrate intake is high (> or =1.2 g/kg/h) and provided at regular intervals, a further increase in insulin concentrations by additional supplementation of protein and/or amino acids does not further increase the rate of muscle glycogen synthesis. 
Thus, when carbohydrate intake is insufficient (<1.2 g/kg/h), the addition of certain amino acids and/or proteins may be beneficial for muscle glycogen synthesis. Furthermore, ingestion of insulinotropic protein and/or amino acid mixtures might stimulate post-exercise net muscle protein anabolism. Suggestions have been made that carbohydrate availability is the main limiting factor for glycogen synthesis. A large part of the ingested glucose that enters the bloodstream appears to be extracted by tissues other than the exercise muscle (i.e. liver, other muscle groups or fat tissue) and may therefore limit the amount of glucose available to maximise muscle glycogen synthesis rates. Furthermore, intestinal glucose absorption may also be a rate-limiting factor for muscle glycogen synthesis when large quantities (>1 g/min) of glucose are ingested following exercise.", "title": "" }, { "docid": "6f70b6071c945ca22edda9e2b8fe22a8", "text": "BACKGROUND\nHyaluronidase (Hylase Dessau(®)) is a hyaluronic acid-metabolizing enzyme, which has been shown to loosen the extracellular matrix, thereby improving the diffusion of local anesthetics. Lower eyelid edema is a common post-interventional complication of cosmetic procedures performed in the lid region, such as the injection of hyaluronic acid fillers for tear-trough augmentation. The purpose of this study was to validate the efficacy of hyaluronidase in the management of lower eyelid edema.\n\n\nMETHODS\nWe performed a retrospective analysis with 20 patients with lower eyelid edema. Most patients (n = 14) presented with edema following hyaluronic acid injection (tear-trough augmentation), whereas the minority (n = 6) were treated due to idiopathic edema (malar edema or malar mounds). Patients were treated by local infiltration of approximately 0.2 ml to 0.5 ml of hyaluronidase (Hylase Dessau(®) 20 IU to 75 IU) per eyelid. Photographs were taken prior to and seven days after infiltration.\n\n\nRESULTS\nHyaluronidase was found to reduce effectively and rapidly or resolve eyelid edema after a single injection. No relevant adverse effects were observed. However, it must be noted that a hyaluronidase injection may also dissolve injected hyaluronic acid fillers and may therefore negatively affect tear-trough augmentations. While the effects of a treatment for edema due to tear-trough augmentation were permanent, malar edema and malar mounds reoccurred within two to three weeks.\n\n\nCONCLUSION\nThe infiltration of hyaluronidase is rapid, safe and currently the only effective option for the management of eyelid edema. No relevant adverse effects were observed.", "title": "" }, { "docid": "7209596ad58da21211bfe0ceaaccc72b", "text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. 
Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.", "title": "" }, { "docid": "acfe7531f67a40e27390575a69dcd165", "text": "This paper reviews the relationship between attention deficit hyperactivity disorder (ADHD) and academic performance. First, the relationship at different developmental stages is examined, focusing on pre-schoolers, children, adolescents and adults. Second, the review examines the factors underpinning the relationship between ADHD and academic underperformance: the literature suggests that it is the symptoms of ADHD and underlying cognitive deficits, not co-morbid conduct problems, that are at the root of academic impairment. The review concludes with an overview of the literature examining strategies that are directed towards remediating the academic impairment of individuals with ADHD.", "title": "" }, { "docid": "ebb01a778c668ef7b439875eaa5682ac", "text": "In this paper, we present a large-scale off-line handwritten Chinese character database, HCL2000, which will be made publicly available for the research community. The database contains 3,755 frequently used simplified Chinese characters written by 1,000 different subjects. The writers’ information is incorporated in the database to facilitate testing on grouping writers with different backgrounds such as age, occupation, gender, and education. We investigate some characteristics of writing styles from different groups of writers. We evaluate the HCL2000 database using three different algorithms as a baseline. We decide to publish the database along with this paper and make it free for research purposes.", "title": "" }, { "docid": "0d7ce42011c48232189c791e71c289f5", "text": "RECENT WORK in virtue ethics, particularly sustained reflection on specific virtues, makes it possible to argue that the classical list of cardinal virtues (prudence, justice, temperance, and fortitude) is inadequate, and that we need to articulate the cardinal virtues more correctly. With that end in view, the first section of this article describes the challenges of espousing cardinal virtues today, the second considers the inadequacy of the classical listing of cardinal virtues, and the third makes a proposal. Since virtues, no matter how general, should always relate to concrete living, the article is framed by a case.", "title": "" }, { "docid": "c6d25017a6cba404922933672a18d08a", "text": "The Internet of Things (IoT) makes smart objects the ultimate building blocks in the development of cyber-physical smart pervasive frameworks. The IoT has a variety of application domains, including health care. The IoT revolution is redesigning modern health care with promising technological, economic, and social prospects. This paper surveys advances in IoT-based health care technologies and reviews the state-of-the-art network architectures/platforms, applications, and industrial trends in IoT-based health care solutions. In addition, this paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attack taxonomies from the health care perspective. 

Further, this paper proposes an intelligent collaborative security model to minimize security risk; discusses how different innovations such as big data, ambient intelligence, and wearables can be leveraged in a health care context; addresses various IoT and eHealth policies and regulations across the world to determine how they can facilitate economies and societies in terms of sustainable development; and provides some avenues for future research on IoT-based health care based on a set of open issues and challenges.", "title": "" }, { "docid": "e9402a771cc761e7e6484c2be6bc2cce", "text": "In this work, we present the Text Conditioned Auxiliary Classifier Generative Adversarial Network, (TAC-GAN) a text to image Generative Adversarial Network (GAN) for synthesizing images from their text descriptions. Former approaches have tried to condition the generative process on the textual data; but allying it to the usage of class information, known to diversify the generated samples and improve their structural coherence, has not been explored. We trained the presented TAC-GAN model on the Oxford102 dataset of flowers, and evaluated the discriminability of the generated images with Inception-Score, as well as their diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our approach outperforms the stateof-the-art models, i.e., its inception score is 3.45, corresponding to a relative increase of 7.8% compared to the recently introduced StackGan. A comparison of the mean MS-SSIM scores of the training and generated samples per class shows that our approach is able to generate highly diverse images with an average MS-SSIM of 0.14 over all generated classes.", "title": "" }, { "docid": "c8cd0c0ebd38b3e287d6e6eed965db6b", "text": "Goalball, one of the official Paralympic events, is popular with visually impaired people all over the world. The purpose of goalball is to throw the specialized ball, with bells inside it, to the goal line of the opponents as many times as possible while defenders try to block the thrown ball with their bodies. Since goalball players cannot rely on visual information, they need to grasp the game situation using their auditory sense. However, it is hard, especially for beginners, to perceive the direction and distance of the thrown ball. In addition, they generally tend to be afraid of the approaching ball because, without visual information, they could be hit by a high-speed ball. In this paper, our goal is to develop an application called GoalBaural (Goalball + aural) that enables goalball players to improve the recognizability of the direction and distance of a thrown ball without going onto the court and playing goalball. The evaluation result indicated that our application would be efficient in improving the speed and the accuracy of locating the balls.", "title": "" }, { "docid": "78921cbdbc80f714598d8fb9ae750c7e", "text": "Duplicates in data management are common and problematic. In this work, we present a translation of Datalog under bag semantics into a well-behaved extension of Datalog, the so-called warded Datalog±, under set semantics. From a theoretical point of view, this allows us to reason on bag semantics by making use of the well-established theoretical foundations of set semantics. From a practical point of view, this allows us to handle the bag semantics of Datalog by powerful, existing query engines for the required extension of Datalog. This use of Datalog± is extended to give a set semantics to duplicates in Datalog± itself. 
We investigate the properties of the resulting Datalog± programs, the problem of deciding multiplicities, and expressibility of some bag operations. Moreover, the proposed translation has the potential for interesting applications such as to Multiset Relational Algebra and the semantic web query language SPARQL with bag semantics. 2012 ACM Subject Classification Information systems → Query languages; Theory of computation → Logic; Theory of computation → Semantics and reasoning", "title": "" }, { "docid": "d2a04795fa95d2534b000dbf211cd4b9", "text": "Tracking multiple targets is a challenging problem, especially when the targets are “identical”, in the sense that the same model is used to describe each target. In this case, simply instantiating several independent 1-body trackers is not an adequate solution, because the independent trackers tend to coalesce onto the best-fitting target. This paper presents an observation density for tracking which solves this problem by exhibiting a probabilistic exclusion principle. Exclusion arises naturally from a systematic derivation of the observation density, without relying on heuristics. Another important contribution of the paper is the presentation of partitioned sampling, a new sampling method for multiple object tracking. Partitioned sampling avoids the high computational load associated with fully coupled trackers, while retaining the desirable properties of coupling.", "title": "" }, { "docid": "e1404d2926f51455690883caf01fb2f9", "text": "The integration of data produced and collected across autonomous, heterogeneous web services is an increasingly important and challenging problem. Due to the lack of global identifiers, the same entity (e.g., a product) might have different textual representations across databases. Textual data is also often noisy because of transcription errors, incomplete information, and lack of standard formats. A fundamental task during data integration is matching of strings that refer to the same entity. In this paper, we adopt the widely used and established cosine similarity metric from the information retrieval field in order to identify potential string matches across web sources. We then use this similarity metric to characterize this key aspect of data integration as a join between relations on textual attributes, where the similarity of matches exceeds a specified threshold. Computing an exact answer to the text join can be expensive. For query processing efficiency, we propose a sampling-based join approximation strategy for execution in a standard, unmodified relational database management system (RDBMS), since more and more web sites are powered by RDBMSs with a web-based front end. We implement the join inside an RDBMS, using SQL queries, for scalability and robustness reasons. Finally, we present a detailed performance evaluation of an implementation of our algorithm within a commercial RDBMS, using real-life data sets. Our experimental results demonstrate the efficiency and accuracy of our techniques.", "title": "" }, { "docid": "fdefbb2ed3185eadb4657879d9776d34", "text": "Convenient monitoring of vital signs, particularly blood pressure(BP), is critical to improve the effectiveness of health-care and prevent chronic diseases. This study presents a user-friendly, low-cost, real-time, and non-contact technique for BP measurement based on the detection of photoplethysmography (PPG) using a regular webcam. 
Leveraging features extracted from photoplethysmograph, an individual's BP can be estimated using a neural network. Experiments were performed on 20 human participants during three different daytime slots given the influence of background illumination. Compared against the systolic blood pressure and diastolic blood pressure readings collected from a commercially available BP monitor, the proposed technique achieves an average error rate of 9.62% (Systolic BP) and 11.63% (Diastolic BP) for the afternoon session, and 8.4% (Systolic BP) and 11.18% (Diastolic BP) for the evening session. The proposed technique can be easily extended to the camera on any mobile device and thus be widely used in a pervasive manner.", "title": "" }, { "docid": "bda980d41e0b64ec7ec41502cada6e7f", "text": "In this paper, we address semantic parsing in a multilingual context. We train one multilingual model that is capable of parsing natural language sentences from multiple different languages into their corresponding formal semantic representations. We extend an existing sequence-to-tree model to a multi-task learning framework which shares the decoder for generating semantic representations. We report evaluation results on the multilingual GeoQuery corpus and introduce a new multilingual version of the ATIS corpus.", "title": "" }, { "docid": "f38854d7c788815d8bc6d20db284e238", "text": "This paper presents the development of a Sinhala Speech Recognition System to be deployed in an Interactive Voice Response (IVR) system of a telecommunication service provider. The main objectives are to recognize Sinhala digits and names of Sinhala songs to be set up as ringback tones. Sinhala being a phonetic language, its features are studied to develop a list of 47 phonemes. A continuous speech recognition system is developed based on Hidden Markov Model (HMM). The acoustic model is trained using the voice through mobile phone. The outcome is a speaker independent speech recognition system which is capable of recognizing 10 digits and 50 Sinhala songs. A word error rate (WER) of 11.2% using a speech corpus of 0.862 hours and a sentence error rate (SER) of 5.7% using a speech corpus of 1.388 hours are achieved for digits and songs respectively.", "title": "" } ]
scidocsrr
b43f02c9229dc9058cf2a8ac499dca7b
Strategic Object Oriented Reinforcement Learning
[ { "docid": "5a40dc82635b3e9905b652da114eb3f4", "text": "Deep reinforcement learning (DRL) brings the power of deep neural networks to bear on the generic task of trial-and-error learning, and its effectiveness has been convincingly demonstrated on tasks such as Atari video games and the game of Go. However, contemporary DRL systems inherit a number of shortcomings from the current generation of deep learning techniques. For example, they require very large datasets to work effectively, entailing that they are slow to learn even when such datasets are available. Moreover, they lack the ability to reason on an abstract level, which makes it difficult to implement high-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning. Finally, their operation is largely opaque to humans, rendering them unsuitable for domains in which verifiability is important. In this paper, we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings. As proof-ofconcept, we present a preliminary implementation of the architecture and apply it to several variants of a simple video game. We show that the resulting system – though just a prototype – learns effectively, and, by acquiring a set of symbolic rules that are easily comprehensible to humans, dramatically outperforms a conventional, fully neural DRL system on a stochastic variant of the game.", "title": "" }, { "docid": "79f1473d4eb0c456660543fda3a648f1", "text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.", "title": "" }, { "docid": "5bcb714e7badbffdb52d7673bbcd3839", "text": "Several algorithms for learning near-optimal policies in Markov Decision Processes have been analyzed and proven efficient. Empirical results have suggested that Model-based Interval Estimation (MBIE) learns efficiently in practice, effectively balancing exploration and exploitation. This paper presents a theoretical analysis of MBIE and a new variation called MBIE-EB, proving their efficiency even under worst-case conditions. The paper also introduces a new performance metric, average loss, and relates it to its less “online” cousins from the literature.", "title": "" } ]
[ { "docid": "71da47c6837022a80dccabb0a1f5c00e", "text": "The treatment of obesity and cardiovascular diseases is one of the most difficult and important challenges nowadays. Weight loss is frequently offered as a therapy and is aimed at improving some of the components of the metabolic syndrome. Among various diets, ketogenic diets, which are very low in carbohydrates and usually high in fats and/or proteins, have gained in popularity. Results regarding the impact of such diets on cardiovascular risk factors are controversial, both in animals and humans, but some improvements notably in obesity and type 2 diabetes have been described. Unfortunately, these effects seem to be limited in time. Moreover, these diets are not totally safe and can be associated with some adverse events. Notably, in rodents, development of nonalcoholic fatty liver disease (NAFLD) and insulin resistance have been described. The aim of this review is to discuss the role of ketogenic diets on different cardiovascular risk factors in both animals and humans based on available evidence.", "title": "" }, { "docid": "dd14599e6a4d2e83a7a476471be53d13", "text": "This paper presents the modeling, design, fabrication, and measurement of microelectromechanical systems-enabled continuously tunable evanescent-mode electromagnetic cavity resonators and filters with very high unloaded quality factors (Qu). Integrated electrostatically actuated thin diaphragms are used, for the first time, for tuning the frequency of the resonators/filters. An example tunable resonator with 2.6:1 (5.0-1.9 GHz) tuning ratio and Qu of 300-650 is presented. A continuously tunable two-pole filter from 3.04 to 4.71 GHz with 0.7% bandwidth and insertion loss of 3.55-2.38 dB is also shown as a technology demonstrator. Mechanical stability measurements show that the tunable resonators/filters exhibit very low frequency drift (less than 0.5% for 3 h) under constant bias voltage. This paper significantly expands upon previously reported tunable resonators.", "title": "" }, { "docid": "8ed4354b483241e7046f042af2f94680", "text": "There is a growing literature on managing multitasking and interruptions in the workplace. In an ethnographic study, we investigated the phenomenon of communication chains, the occurrence of interactions in quick succession. Focusing on chains enable us to better understand the role of communication in multitasking. Our results reveal that chains are prevalent in information workers, and that attributes such as the number of links, and the rate of media and organizational switching can be predicted from the first catalyzing link of the chain. When chains are triggered by external interruptions, they have more links, a trend for more media switches and more organizational switches. We also found that more switching of organizational contexts in communication is associated with higher levels of stress. We describe the role of communication chains as performing alignment in multitasking and discuss the implications of our results.", "title": "" }, { "docid": "8d45138ec69bb4ee47efa088c03d7a42", "text": "Precision medicine is at the forefront of biomedical research. Cancer registries provide rich perspectives and electronic health records (EHRs) are commonly utilized to gather additional clinical data elements needed for translational research. However, manual annotation is resource-intense and not readily scalable. 
Informatics-based phenotyping presents an ideal solution, but perspectives obtained can be impacted by both data source and algorithm selection. We derived breast cancer (BC) receptor status phenotypes from structured and unstructured EHR data using rule-based algorithms, including natural language processing (NLP). Overall, the use of NLP increased BC receptor status coverage by 39.2% from 69.1% with structured medication information alone. Using all available EHR data, estrogen receptor-positive BC cases were ascertained with high precision (P = 0.976) and recall (R = 0.987) compared with gold standard chart-reviewed patients. However, status negation (R = 0.591) decreased 40.2% when relying on structured medications alone. Using multiple EHR data types (and thorough understanding of the perspectives offered) are necessary to derive robust EHR-based precision medicine phenotypes.", "title": "" }, { "docid": "77e6593b3078a5d8b23fcb282f90596b", "text": "A graph database is a database where the data structures for the schema and/or instances are modeled as a (labeled)(directed) graph or generalizations of it, and where querying is expressed by graphoriented operations and type constructors. In this article we present the basic notions of graph databases, give an historical overview of its main development, and study the main current systems that implement them.", "title": "" }, { "docid": "163d7e9a00649b3a6036507f6a725af8", "text": "In the last decades, a lot of 3D face recognition techniques have been proposed. They can be divided into three parts, holistic matching techniques, feature-based techniques and hybrid techniques. In this paper, a hybrid technique is used, where, a prototype of a new hybrid face recognition technique depends on 3D face scan images are designed, simulated and implemented. Some geometric rules are used for analyzing and mapping the face. Image processing is used to get the twodimensional values of predetermined and specific facial points, software programming is used to perform a three-dimensional coordinates of the predetermined points and to calculate several geometric parameter ratios and relations. Neural network technique is used for processing the calculated geometric parameters and then performing facial recognition. The new design is not affected by variant pose, illumination and expression and has high accurate level compared with the 2D analysis. Moreover, the proposed algorithm is of higher performance than latest’s published biometric recognition algorithms in terms of cost, confidentiality of results, and availability of design tools.", "title": "" }, { "docid": "abcad2d522600ffc1c2fb81617296a5d", "text": "Text miningconcerns applying data mining techniques to unstructured text.Information extraction(IE) is a form of shallow text understanding that locates specific pieces of data in natural language documents, transforming unstructured text into a structured database. This paper describes a system called DISCOTEX, that combines IE and data mining methodologies to perform text mining as well as improve the performance of the underlying extraction system. Rules mined from a database extracted from a corpus of texts are used to predict additional information to extract from future documents, thereby improving the recall of IE. 
Encouraging results are presented on applying these techniques to a corpus of computer job announcement postings from an Internet newsgroup.", "title": "" }, { "docid": "8aafa283b228bbaa7ff3e37e7ca0a861", "text": "In order to meet the continuously increasing demands for high throughput in wireless networks, IEEE 802 LAN/MAN Standard Committee is developing IEEE 802.11ax: a new amendment for the Wi-Fi standard. This amendment provides various ways to improve the efficiency of Wi-Fi. The most revolutionary one is OFDMA. Apart from obvious advantages, such as decreasing overhead for short packet transmission at high rates and improving robustness to frequency selective interference, being used for uplink transmission, OFDMA can increase power spectral density and, consequently, user data rates. However, the gain of OFDMA mainly depends on the resource scheduling between users. The peculiarities of OFDMA implementation in Wi-Fi completely change properties of classic schedulers used in other OFDMA systems, e.g. LTE. In the paper, we consider the usage of OFDMA in Wi-Fi for uplink transmission. We study peculiarities of OFDMA in Wi-Fi, adapt classic schedulers to Wi-Fi, explaining why they do not perform well. Finally we develop a novel scheduler, MUTAX, and evaluate its performance with simulation.", "title": "" }, { "docid": "b61985ecdb51982e6e31b19c862f18e2", "text": "Autonomous indoor navigation of Micro Aerial Vehicles (MAVs) possesses many challenges. One main reason is because GPS has limited precision in indoor environments. The additional fact that MAVs are not able to carry heavy weight or power consuming sensors, such as range finders, makes indoor autonomous navigation a challenging task. In this paper, we propose a practical system in which a quadcopter autonomously navigates indoors and finds a specific target, i.e. a book bag, by using a single camera. A deep learning model, Convolutional Neural Network (ConvNet), is used to learn a controller strategy that mimics an expert pilot’s choice of action. We show our system’s performance through real-time experiments in diverse indoor locations. To understand more about our trained network, we use several visualization techniques.", "title": "" }, { "docid": "45ec4615b6cc593011eb9a7b714fb325", "text": "There has been a drive recently to make sensor data accessible on the Web. However, because of the vast number of sensors collecting data about our environment, finding relevant sensors on the Web is a non-trivial challenge. In this paper, we present an approach to discovering sensors through a standard service interface over Linked Data. This is accomplished with a semantic sensor network middleware that includes a sensor registry on Linked Data and a sensor discovery service that extends the OGC Sensor Web Enablement. With this approach, we are able to access and discover sensors that are positioned near named-locations of interest.", "title": "" }, { "docid": "a398f3f5b670a9d2c9ae8ad84a4a3cb8", "text": "This project deals with online simultaneous localization and mapping (SLAM) problem without taking any assistance from Global Positioning System (GPS) and Inertial Measurement Unit (IMU). The main aim of this project is to perform online odometry and mapping in real time using a 2-axis lidar mounted on a robot. 
This involves use of two algorithms, the first of which runs at a higher frequency and uses the collected data to estimate velocity of the lidar which is fed to the second algorithm, a scan registration and mapping algorithm, to perform accurate matching of point cloud data.", "title": "" }, { "docid": "3ba011d181a4644c8667b139c63f50ff", "text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.", "title": "" }, { "docid": "e791e6dc74aeb92fad4d4c5421c1fef7", "text": "MPI has been widely used in High Performance Computing. In contrast, such efficient communication support is lacking in the field of Big Data Computing, where communication is realized by time consuming techniques such as HTTP/RPC. This paper takes a step in bridging these two fields by extending MPI to support Hadoop-like Big Data Computing jobs, where processing and communication of a large number of key-value pair instances are needed through distributed computation models such as MapReduce, Iteration, and Streaming. We abstract the characteristics of key-value communication patterns into a bipartite communication model, which reveals four distinctions from MPI: Dichotomic, Dynamic, Data-centric, and Diversified features. Utilizing this model, we propose the specification of a minimalistic extension to MPI. An open source communication library, DataMPI, is developed to implement this specification. 
Performance experiments show that DataMPI has significant advantages in performance and flexibility, while maintaining high productivity, scalability, and fault tolerance of Hadoop.", "title": "" }, { "docid": "e5495c2cce9405e6111e887a436a98d5", "text": "OBJECTIVE\nTo evaluate the use of noninvasive procedures for the detection of myocardial ischemia and its relation with other coexistent clinical factors in patients with asymptomatic type 2 diabetes mellitus.\n\n\nSUBJECTS AND METHODS\nA total of 42 patients with type 2 diabetes mellitus, aged 41-72 years with no clinical history suggestive of coronary heart disease, were evaluated for silent myocardial ischemia by stress cardiac exercise tolerance test (ETT), 12-lead electrocardiography (ECG), transthoracic echocardiography and stress myocardial perfusion scan using technetium-99m tetrofosmin.\n\n\nRESULTS\nEleven patients (26.2%) showed an ischemic pattern on ETT, the resting ECG was suggestive of ischemia in only 2 (4.8%), echocardiography showed diastolic dysfunction in 9 (21.4%), and the stress myocardial perfusion scan was ischemic in 3 (7.3%). For subjects over the age of 57, a significant difference was found between age and ischemic ETT (p = 0.026) and diastolic dysfunction by echocardiography (p = 0.044). Patients with microalbuminuria and/or diastolic dysfunction were more likely than others to have ischemic ETT (p = 0.036 and 0.024, respectively) and patients with diastolic dysfunction had a higher prevalence of ischemic ETT. There was no relation between ischemic ETT and other major cardiac risk factors (hypertension, dyslipidemia, smoking, sex, duration of diabetes, BMI, and glycated hemoglobin levels).\n\n\nCONCLUSION\nThe cardiac ETT was most helpful for detecting myocardial ischemia in asymptomic type 2 diabetics. For equivocal ETT findings, echocardiography is recommended. The prevalence of myocardial ischemia was high in patients with type 2 diabetes mellitus.", "title": "" }, { "docid": "eb72c4bfa65b25785b9a23ca9cd56cc0", "text": "The cortical anatomy of the conscious resting state (REST) was investigated using a meta-analysis of nine positron emission tomography (PET) activation protocols that dealt with different cognitive tasks but shared REST as a common control state. During REST, subjects were in darkness and silence, and were instructed to relax, refrain from moving, and avoid systematic thoughts. Each protocol contrasted REST to a different cognitive task consisting either of language, mental imagery, mental calculation, reasoning, finger movement, or spatial working memory, using either auditory, visual or no stimulus delivery, and requiring either vocal, motor or no output. A total of 63 subjects and 370 spatially normalized PET scans were entered in the meta-analysis. Conjunction analysis revealed a network of brain areas jointly activated during conscious REST as compared to the nine cognitive tasks, including the bilateral angular gyrus, the left anterior precuneus and posterior cingulate cortex, the left medial frontal and anterior cingulate cortex, the left superior and medial frontal sulcus, and the left inferior frontal cortex. 
These results suggest that brain activity during conscious REST is sustained by a large scale network of heteromodal associative parietal and frontal cortical areas, that can be further hierarchically organized in an episodic working memory parieto-frontal network, driven in part by emotions, working under the supervision of an executive left prefrontal network.", "title": "" }, { "docid": "80b964bf911044932fc6a17518f902d0", "text": "In this paper, a rotating contactless power transfer system achieving 200W for space application is presented. Both transformer and electronics designs are presented. A proof-of-concept prototype, making use of components with space grade equivalent has been build and tested.", "title": "" }, { "docid": "8e3eec62b02a9cf7a56803775757925f", "text": "Emotional states of individuals, also known as moods, are central to the expression of thoughts, ideas and opinions, and in turn impact attitudes and behavior. As social media tools are increasingly used by individuals to broadcast their day-to-day happenings, or to report on an external event of interest, understanding the rich ‘landscape’ of moods will help us better interpret and make sense of the behavior of millions of individuals. Motivated by literature in psychology, we study a popular representation of human mood landscape, known as the ‘circumplex model’ that characterizes affective experience through two dimensions: valence and activation. We identify more than 200 moods frequent on Twitter, through mechanical turk studies and psychology literature sources, and report on four aspects of mood expression: the relationship between (1) moods and usage levels, including linguistic diversity of shared content (2) moods and the social ties individuals form, (3) moods and amount of network activity of individuals, and (4) moods and participatory patterns of individuals such as link sharing and conversational engagement. Our results provide at-scale naturalistic assessments and extensions of existing conceptualizations of human mood in social media contexts.", "title": "" }, { "docid": "e35933fd7f6a108e2473cc6a0e9d1182", "text": "Web usage mining is a main research area in Web mining focused on learning about Web users and their interactions with Web sites. Main challenges in Web usage mining are the application of data mining techniques to Web data in an efficient way and the discovery of non trivial user behaviour patterns. In this paper we focus the attention on search engines analyzing query log data and showing several models about how users search and how users use search engine results.", "title": "" }, { "docid": "16c522d458ed5df9d620e8255886e69e", "text": "Linked Stream Data has emerged as an effort to represent dynamic, time-dependent data streams following the principles of Linked Data. Given the increasing number of available stream data sources like sensors and social network services, Linked Stream Data allows an easy and seamless integration, not only among heterogenous stream data, but also between streams and Linked Data collections, enabling a new range of real-time applications. This tutorial gives an overview about Linked Stream Data processing. It describes the basic requirements for the processing, highlighting the challenges that are faced, such as managing the temporal aspects and memory overflow. It presents the different architectures for Linked Stream Data processing engines, their advantages and disadvantages. 
The tutorial also reviews the state-of-the-art Linked Stream Data processing systems, and provides a comparison among them regarding the design choices and overall performance. A short discussion of the current challenges and open problems is given at the end.", "title": "" } ]
scidocsrr
31676b77fc40d569e619caec0dd4fc17
A Pan-Cancer Proteogenomic Atlas of PI3K/AKT/mTOR Pathway Alterations.
[ { "docid": "99ff0acb6d1468936ae1620bc26c205f", "text": "The Cancer Genome Atlas (TCGA) has used the latest sequencing and analysis methods to identify somatic variants across thousands of tumours. Here we present data and analytical results for point mutations and small insertions/deletions from 3,281 tumours across 12 tumour types as part of the TCGA Pan-Cancer effort. We illustrate the distributions of mutation frequencies, types and contexts across tumour types, and establish their links to tissues of origin, environmental/carcinogen influences, and DNA repair defects. Using the integrated data sets, we identified 127 significantly mutated genes from well-known (for example, mitogen-activated protein kinase, phosphatidylinositol-3-OH kinase, Wnt/β-catenin and receptor tyrosine kinase signalling pathways, and cell cycle control) and emerging (for example, histone, histone modification, splicing, metabolism and proteolysis) cellular processes in cancer. The average number of mutations in these significantly mutated genes varies across tumour types; most tumours have two to six, indicating that the number of driver mutations required during oncogenesis is relatively small. Mutations in transcriptional factors/regulators show tissue specificity, whereas histone modifiers are often mutated across several cancer types. Clinical association analysis identifies genes having a significant effect on survival, and investigations of mutations with respect to clonal/subclonal architecture delineate their temporal orders during tumorigenesis. Taken together, these results lay the groundwork for developing new diagnostics and individualizing cancer treatment.", "title": "" } ]
[ { "docid": "6d00686ad4d2d589a415d810b2fcc876", "text": "The accuracy of voice activity detection (VAD) is one of the most important factors which influence the capability of the speech recognition system, how to detect the endpoint precisely in noise environment is still a difficult task. In this paper, we proposed a new VAD method based on Mel-frequency cepstral coefficients (MFCC) similarity. We first extracts the MFCC of a voice signal for each frame, followed by calculating the MFCC Euclidean distance and MFCC correlation coefficient of the test frame and the background noise, Finally, give the experimental results. The results show that at low SNR circumstance, MFCC similarity detection method is better than traditional short-term energy method. Compared with Euclidean distance measure method, correlation coefficient is better.", "title": "" }, { "docid": "070d23b78d7808a19bde68f0ccdd7587", "text": "Deep learning is playing a more and more important role in our daily life and scientific research such as autonomous systems, intelligent life and data mining. However, numerous studies have showed that deep learning with superior performance on many tasks may suffer from subtle perturbations constructed by attacker purposely, called adversarial perturbations, which are imperceptible to human observers but completely effect deep neural network models. The emergence of adversarial attacks has led to questions about neural networks. Therefore, machine learning security and privacy are becoming an increasingly active research area. In this paper, we summarize the prevalent methods for the generating adversarial attacks according to three groups. We elaborated on their ideas and principles of generation. We further analyze the common limitations of these methods and implement statistical experiments of the last layer output on CleverHans to reveal that the detection of adversarial samples is not as difficult as it seems and can be achieved in some relatively simple manners.", "title": "" }, { "docid": "e21aed852a892cbede0a31ad84d50a65", "text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.09.010 ⇑ Corresponding author. Tel.: +1 662 915 5519. E-mail addresses: crego@bus.olemiss.edu (C. R (D. Gamboa), fred.glover@colorado.edu (F. Glover), colin.j.osterman@navy.mil (C. Osterman). Heuristics for the traveling salesman problem (TSP) have made remarkable advances in recent years. We survey the leading methods and the special components responsible for their successful implementations, together with an experimental analysis of computational tests on a challenging and diverse set of symmetric and asymmetric TSP benchmark problems. The foremost algorithms are represented by two families, deriving from the Lin–Kernighan (LK) method and the stem-and-cycle (S&C) method. We show how these families can be conveniently viewed within a common ejection chain framework which sheds light on their similarities and differences, and gives clues about the nature of potential enhancements to today’s best methods that may provide additional gains in solving large and difficult TSPs. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "47de26ecd5f759afa7361c7eff9e9b25", "text": "At many teaching hospitals, it is common practice for on-call radiology residents to interpret radiology examinations; such reports are later reviewed and revised by an attending physician before being used for any decision making. 
In case there are substantial problems in the resident’s initial report, the resident is called and the problems are reviewed to prevent similar future reporting errors. However, due to the large volume of reports produced, attending physicians rarely discuss the problems side by side with residents, thus missing an educational opportunity. In this work, we introduce a pipeline to discriminate between reports with significant discrepancies and those with non-significant discrepancies. The former contain severe errors or mis-interpretations, thus representing a great learning opportunity for the resident; the latter presents only minor differences (often stylistic) and have a minor role in the education of a resident. By discriminating between the two, the proposed system could flag those reports that an attending radiology should definitely review with residents under their supervision. We evaluated our approach on 350 manually annotated radiology reports sampled from a collection of tens of thousands. The proposed classifier achieves an Area Under the Curve (AUC) of 0.837, which represent a 14% improvement over the baselines. Furthermore, the classifier reduces the False Negative Rate (FNR) by 52%, a desirable performance metric for any recall-oriented task such as the one studied", "title": "" }, { "docid": "48485e967c5aa345a53b91b47cc0e6d0", "text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.", "title": "" }, { "docid": "d7f743ddff9863b046ab91304b37a667", "text": "In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramér-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization.", "title": "" }, { "docid": "8a37001733b0ee384277526bd864fe04", "text": "Miscreants use DDoS botnets to attack a victim via a large number of malware-infected hosts, combining the bandwidth of the individual PCs. 
Such botnets have thus a high potential to render targeted services unavailable. However, the actual impact of attacks by DDoS botnets has never been evaluated. In this paper, we monitor C&C servers of 14 DirtJumper and Yoddos botnets and record the DDoS targets of these networks. We then aim to evaluate the availability of the DDoS victims, using a variety of measurements such as TCP response times and analyzing the HTTP content. We show that more than 65% of the victims are severely affected by the DDoS attacks, while also a few DDoS attacks likely failed.", "title": "" }, { "docid": "7c1c7eb4f011ace0734dd52759ce077f", "text": "OBJECTIVES\nTo investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke.\n\n\nDESIGN\nA randomized controlled trial.\n\n\nSETTING\nOccupational therapy clinics in medical centers.\n\n\nSUBJECTS\nThirty-one subacute stroke patients were recruited.\n\n\nINTERVENTIONS\nParticipants were randomly assigned to receive bilateral priming combined with the task-oriented approach (i.e., primed group) or to the task-oriented approach alone (i.e., unprimed group) for 90 minutes/day, 5 days/week for 4 weeks. The primed group began with the bilateral priming technique by using a bimanual robot-aided device.\n\n\nMAIN MEASURES\nMotor impairments were assessed by the Fugal-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale.\n\n\nRESULTS\nThe primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale ( p = 0.012) and a trend for greater improvement on the modified Rankin Scale ( p = 0.065) than the unprimed group.\n\n\nCONCLUSION\nBilateral priming combined with the task-oriented approach elicited more improvements in self-reported strength and disability degrees than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.", "title": "" }, { "docid": "52c0c6d1deacdca44df5000b2b437c78", "text": "This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation- maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. 
Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.", "title": "" }, { "docid": "2e987add43a584bdd0a67800ad28c5f8", "text": "The bones of elderly people with osteoporosis are susceptible to either traumatic fracture as a result of external impact, such as what happens during a fall, or even spontaneous fracture without trauma as a result of muscle contraction [1, 2]. Understanding the fracture behavior of bone tissue will help researchers find proper treatments to strengthen the bone in order to prevent such fractures, and design better implants to reduce the chance of secondary fracture after receiving the implant.", "title": "" }, { "docid": "863db7439c2117e36cc2a789b557a665", "text": "A core brain network has been proposed to underlie a number of different processes, including remembering, prospection, navigation, and theory of mind [Buckner, R. L., & Carroll, D. C. Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57, 2007]. This purported network—medial prefrontal, medial-temporal, and medial and lateral parietal regions—is similar to that observed during default-mode processing and has been argued to represent self-projection [Buckner, R. L., & Carroll, D. C. Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57, 2007] or scene-construction [Hassabis, D., & Maguire, E. A. Deconstructing episodic memory with construction. Trends in Cognitive Sciences, 11, 299–306, 2007]. To date, no systematic and quantitative demonstration of evidence for this common network has been presented. Using the activation likelihood estimation (ALE) approach, we conducted four separate quantitative meta-analyses of neuroimaging studies on: (a) autobiographical memory, (b) navigation, (c) theory of mind, and (d) default mode. A conjunction analysis between these domains demonstrated a high degree of correspondence. We compared these findings to a separate ALE analysis of prospection studies and found additional correspondence. Across all domains, and consistent with the proposed network, correspondence was found within the medial-temporal lobe, precuneus, posterior cingulate, retrosplenial cortex, and the temporo-parietal junction. Additionally, this study revealed that the core network extends to lateral prefrontal and occipital cortices. Autobiographical memory, prospection, theory of mind, and default mode demonstrated further reliable involvement of the medial prefrontal cortex and lateral temporal cortices. Autobiographical memory and theory of mind, previously studied as distinct, exhibited extensive functional overlap. These findings represent quantitative evidence for a core network underlying a variety of cognitive domains.", "title": "" }, { "docid": "566412870c83e5e44fabc50487b9d994", "text": "The influence of technology in the field of gambling innovation continues to grow at a rapid pace. After a brief overview of gambling technologies and deregulation issues, this review examines the impact of technology on gambling by highlighting salient factors in the rise of Internet gambling (i.e., accessibility, affordability, anonymity, convenience, escape immersion/dissociation, disinhibition, event frequency, asociability, interactivity, and simulation). The paper also examines other factors in relation to Internet gambling including the relationship between Internet addiction and Internet gambling addiction. 
The paper ends by overviewing some of the social issues surrounding Internet gambling (i.e., protection of the vulnerable, Internet gambling in the workplace, electronic cash, and unscrupulous operators). Recommendations for Internet gambling operators are also provided.", "title": "" }, { "docid": "28574c82a49b096b11f1b78b5d62e425", "text": "A major reason for the current reproducibility crisis in the life sciences is the poor implementation of quality control measures and reporting standards. Improvement is needed, especially regarding increasingly complex in vitro methods. Good Cell Culture Practice (GCCP) was an effort from 1996 to 2005 to develop such minimum quality standards also applicable in academia. This paper summarizes recent key developments in in vitro cell culture and addresses the issues resulting for GCCP, e.g. the development of induced pluripotent stem cells (iPSCs) and gene-edited cells. It further deals with human stem-cell-derived models and bioengineering of organo-typic cell cultures, including organoids, organ-on-chip and human-on-chip approaches. Commercial vendors and cell banks have made human primary cells more widely available over the last decade, increasing their use, but also requiring specific guidance as to GCCP. The characterization of cell culture systems including high-content imaging and high-throughput measurement technologies increasingly combined with more complex cell and tissue cultures represent a further challenge for GCCP. The increasing use of gene editing techniques to generate and modify in vitro culture models also requires discussion of its impact on GCCP. International (often varying) legislations and market forces originating from the commercialization of cell and tissue products and technologies are further impacting on the need for the use of GCCP. This report summarizes the recommendations of the second of two workshops, held in Germany in December 2015, aiming map the challenge and organize the process or developing a revised GCCP 2.0.", "title": "" }, { "docid": "59c2e1dcf41843d859287124cc655b05", "text": "Atherosclerotic cardiovascular disease (ASCVD) is the most common cause of death in most Western countries. Nutrition factors contribute importantly to this high risk for ASCVD. Favourable alterations in diet can reduce six of the nine major risk factors for ASCVD, i.e. high serum LDL-cholesterol levels, high fasting serum triacylglycerol levels, low HDL-cholesterol levels, hypertension, diabetes and obesity. Wholegrain foods may be one the healthiest choices individuals can make to lower the risk for ASCVD. Epidemiological studies indicate that individuals with higher levels (in the highest quintile) of whole-grain intake have a 29 % lower risk for ASCVD than individuals with lower levels (lowest quintile) of whole-grain intake. It is of interest that neither the highest levels of cereal fibre nor the highest levels of refined cereals provide appreciable protection against ASCVD. Generous intake of whole grains also provides protection from development of diabetes and obesity. Diets rich in wholegrain foods tend to decrease serum LDL-cholesterol and triacylglycerol levels as well as blood pressure while increasing serum HDL-cholesterol levels. Whole-grain intake may also favourably alter antioxidant status, serum homocysteine levels, vascular reactivity and the inflammatory state. 
Whole-grain components that appear to make major contributions to these protective effects are: dietary fibre; vitamins; minerals; antioxidants; phytosterols; other phytochemicals. Three servings of whole grains daily are recommended to provide these health benefits.", "title": "" }, { "docid": "66370e97fba315711708b13e0a1c9600", "text": "Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve the privacy-preserving public cloud data auditing system, which meets all above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.", "title": "" }, { "docid": "a2f65eb4a81bc44ea810d834ab33d891", "text": "This survey provides the basis for developing research in the area of mobile manipulator performance measurement, an area that has relatively few research articles when compared to other mobile manipulator research areas. The survey provides a literature review of mobile manipulator research with examples of experimental applications. The survey also provides an extensive list of planning and control references as this has been the major research focus for mobile manipulators which factors into performance measurement of the system. The survey then reviews performance metrics considered for mobile robots, robot arms, and mobile manipulators and the systems that measure their performance, including machine tool measurement systems through dynamic motion tracking systems. 
Lastly, the survey includes a section on research that has occurred for performance measurement of robots, mobile robots, and mobile manipulators beginning with calibration, standards, and mobile manipulator artifacts that are being considered for evaluation of mobile manipulator performance.", "title": "" }, { "docid": "d56807574d6185c6e3cd9a8e277f8006", "text": "There is a substantial literature on e-government that discusses information and communication technology (ICT) as an instrument for reducing the role of bureaucracy in government organizations. The purpose of this paper is to offer a critical discussion of this literature and to provide a complementary argument, which favors the use of ICT in the public sector to support the operations of bureaucratic organizations. Based on the findings of a case study – of the Venice municipality in Italy – the paper discusses how ICT can be used to support rather than eliminate bureaucracy. Using the concepts of e-bureaucracy and functional simplification and closure, the paper proposes evidence and support for the argument that bureaucracy should be preserved and enhanced where e-government policies are concerned. Functional simplification and closure are very valuable concepts for explaining why this should be a viable approach.", "title": "" }, { "docid": "77bbeb9510f4c9000291910bf06e4a22", "text": "Traveling Salesman Problem is an important optimization issue of many fields such as transportation, logistics and semiconductor industries and it is about finding a Hamiltonian path with minimum cost. To solve this problem, many researchers have proposed different approaches including metaheuristic methods. Artificial Bee Colony algorithm is a well known swarm based optimization technique. In this paper we propose a new Artificial Bee Colony algorithm called Combinatorial ABC for Traveling Salesman Problem. Simulation results show that this Artificial Bee Colony algorithm can be used for combinatorial optimization problems.", "title": "" }, { "docid": "dbafea1fbab901ff5a53f752f3bfb4b8", "text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.", "title": "" }, { "docid": "de761c4e3e79b5b4d056552e0a71a7b6", "text": "A novel multiple-input multiple-output (MIMO) dielectric resonator antenna (DRA) for long term evolution (LTE) femtocell base stations is described. The proposed antenna is able to transmit and receive information independently using TE and HE modes in the LTE bands 12 (698-716 MHz, 728-746 MHz) and 17 (704-716 MHz, 734-746 MHz). A systematic design method based on perturbation theory is proposed to induce mode degeneration for MIMO operation. 
Through perturbing the boundary of the DRA, the amount of energy stored by a specific mode is changed as well as the resonant frequency of that mode. Hence, by introducing an adequate boundary perturbation, the TE and HE modes of the DRA will resonate at the same frequency and share a common impedance bandwidth. The simulated mutual coupling between the modes was as low as - 40 dB . It was estimated that in a rich scattering environment with an Signal-to-Noise Ratio (SNR) of 20 dB per receiver branch, the proposed MIMO DRA was able to achieve a channel capacity of 11.1 b/s/Hz (as compared to theoretical maximum 2 × 2 capacity of 13.4 b/s/Hz). Our experimental measurements successfully demonstrated the design methodology proposed in this work.", "title": "" } ]
scidocsrr
2465d29191b3ef50436fd60e65b42940
A new rail inspection method based on deep learning using laser cameras
[ { "docid": "e560cd7561d4f518cdab6bd1f5441de8", "text": "Rail inspection is a very important task in railway maintenance, and it is periodically needed for preventing dangerous situations. Inspection is operated manually by trained human operator walking along the track searching for visual anomalies. This monitoring is unacceptable for slowness and lack of objectivity, as the results are related to the ability of the observer to recognize critical situations. The correspondence presents a patent-pending real-time Visual Inspection System for Railway (VISyR) maintenance, and describes how presence/absence of the fastening bolts that fix the rails to the sleepers is automatically detected. VISyR acquires images from a digital line-scan camera. Data are simultaneously preprocessed according to two discrete wavelet transforms, and then provided to two multilayer perceptron neural classifiers (MLPNCs). The \"cross validation\" of these MLPNCs avoids (practically-at-all) false positives, and reveals the presence/absence of the fastening bolts with an accuracy of 99.6% in detecting visible bolts and of 95% in detecting missing bolts. A field-programmable gate array-based architecture performs these tasks in 8.09 mus, allowing an on-the-fly analysis of a video sequence acquired at 200 km/h", "title": "" } ]
[ { "docid": "db75809bcc029a4105dc12c63e2eca76", "text": "Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal ‘fingerprint’ of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.", "title": "" }, { "docid": "d319a17ad2fa46e0278e0b0f51832f4b", "text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.", "title": "" }, { "docid": "51b1f69c4bdc5fd034f482ad9ffa4549", "text": "The synapse is the focus of experimental research and theory on the cellular mechanisms of nervous system plasticity and learning, but recent research is expanding the consideration of plasticity into new mechanisms beyond the synapse, notably including the possibility that conduction velocity could be modifiable through changes in myelin to optimize the timing of information transmission through neural circuits. This concept emerges from a confluence of brain imaging that reveals changes in white matter in the human brain during learning, together with cellular studies showing that the process of myelination can be influenced by action potential firing in axons. 
This Opinion article summarizes the new research on activity-dependent myelination, explores the possible implications of these studies and outlines the potential for new research.", "title": "" }, { "docid": "dfa890a87b2e5ac80f61c793c8bca791", "text": "Reinforcement learning (RL) algorithms have traditionally been thought of as trial and error learning methods that use actual control experience to incrementally improve a control policy. Sutton's DYNA architecture demonstrated that RL algorithms can work as well using simulated experience from an environment model, and that the resulting computation was similar to doing one-step lookahead planning. Inspired by the literature on hierarchical planning, I propose learning a hierarchy of models of the environment that abstract temporal detail as a means of improving the scalability of RL algorithms. I present H-DYNA (Hierarchical DYNA), an extension to Sutton's DYNA architecture that is able to learn such a hierarchy of abstract models. H-DYNA differs from hierarchical planners in two ways: first, the abstract models are learned using experience gained while learning to solve other tasks in the same environment, and second, the abstract models can be used to solve stochastic control tasks. Simulations on a set of compositionally-structured navigation tasks show that H-DYNA can learn to solve them faster than conventional RL algorithms. The abstract models also serve as mechanisms for achieving transfer of learning across multiple tasks.", "title": "" }, { "docid": "c429bf418a4ecbd56c7b2ab6f4ca3cd6", "text": "The Internet contains a huge amount of useful information, which is usually formatted for human users, making it hard to extract relevant information from different sources. Accordingly, there is a need for robust, adaptable Information Extraction systems that automatically extract structured data, such as entities, relationships between entities, and attributes, from unstructured or semi-structured sources. However, the extraction process may lead to a loss of meaning, which is not acceptable. The Semantic Web offers a solution to this problem: it provides meaning to the data and allows machines to understand and recognize these augmented data more accurately. The proposed system extracts information from research data in the IT domain, such as journals of IEEE, Springer, etc., which helps researchers and organizations obtain journal data in an optimized manner, reducing the time and effort of browsing and reading entire journal papers or articles. The accuracy of the system is also addressed using RDF: the extracted data has specific declarative semantics so that the meaning of the research papers or articles remains unchanged during extraction. In addition, the same approach can be applied to multiple documents, saving further time.", "title": "" }, { "docid": "16e2f269c21eaf2bf856bb0996ab8135", "text": "In this paper, we present a cryptographic technique for an authenticated, end-to-end verifiable and secret ballot election. Voters should receive assurance that their vote is cast as intended, recorded as cast and tallied as recorded. The election system as a whole should ensure that voter coercion is unlikely, even when voters are willing to be influenced. Currently, almost all verifiable e-voting systems require trusted authorities to perform the tallying process. An exception is the DRE-i and DRE-ip system.
The DRE-ip system removes the requirement of tallying authorities by encrypting ballots in such a way that the election tally can be publicly verified without decrypting cast ballots. However, the DRE-ip system necessitates a secure bulletin board (BB) for storing the encrypted ballots, since without it the integrity of the system may be lost and the result can be compromised without detection during the audit phase. In this paper, we have modified the DRE-ip system so that if any recorded ballot is tampered with by an adversary before the tallying phase, it will be detected during the tallying phase. In addition, we have described a method using a zero-knowledge-based public blockchain to store these ballots so that they remain tamper-proof. To the best of our knowledge, it is the first end-to-end verifiable Direct-recording electronic (DRE) based e-voting system using blockchain. In our case, we assume that the bulletin board is insecure and an adversary has read and write access to the bulletin board. We have also added a secure biometric and government-provided identity card based authentication mechanism for voter authentication. The proposed system is able to encrypt ballots in such a way that the election tally can be publicly verified without decrypting cast ballots, maintaining end-to-end verifiability and without requiring a secure bulletin board.", "title": "" }, { "docid": "202dc8823d3d16bc26653727ac1ef67f", "text": "Near-sensor data analytics is a promising direction for internet-of-things endpoints, as it minimizes energy spent on communication and reduces network load - but it also poses security concerns, as valuable data are stored or sent over the network at various stages of the analytics pipeline. Using encryption to protect sensitive data at the boundary of the on-chip analytics engine is a way to address data security issues. To cope with the combined workload of analytics and encryption in a tight power envelope, we propose Fulmine, a system-on-chip (SoC) based on a tightly-coupled multi-core cluster augmented with specialized blocks for compute-intensive data processing and encryption functions, supporting software programmability for regular computing tasks. The Fulmine SoC, fabricated in 65-nm technology, consumes less than 20mW on average at 0.8V, achieving an efficiency of up to 70pJ/B in encryption, 50pJ/px in convolution, or up to 25MIPS/mW in software. As a strong argument for real-life flexible application of our platform, we show experimental results for three secure analytics use cases: secure autonomous aerial surveillance with a state-of-the-art deep convolutional neural network (CNN) consuming 3.16pJ per equivalent reduced instruction set computer operation, local CNN-based face detection with secured remote recognition in 5.74pJ/op, and seizure detection with encrypted data collection from electroencephalogram within 12.7pJ/op.", "title": "" }, { "docid": "9f68df51d0d47b539a6c42207536d012", "text": "Schizophrenia-spectrum risk alleles may persist in the population, despite their reproductive costs in individuals with schizophrenia, through the possible creativity benefits of mild schizotypy in non-psychotic relatives. To assess this creativity-benefit model, we measured creativity (using 6 verbal and 8 drawing tasks), schizotypy, Big Five personality traits, and general intelligence in 225 University of New Mexico students.
Multiple regression analyses showed that openness and intelligence, but not schizotypy, predicted reliable observer ratings of verbal and drawing creativity. Thus, the 'madness-creativity' link seems mediated by the personality trait of openness, and standard creativity-benefit models seem unlikely to explain schizophrenia's evolutionary persistence.", "title": "" }, { "docid": "e51f4a7eb2e933057f18a625a6e926ff", "text": "In this paper an integrated wide-band transition from a differential micro-strip line to a rectangular WR-15 waveguide is presented. The transition makes use of a cavity that is entirely integrated into the multilayer printed circuit board (PCB), which offers three layers (RF signal layer, ground plane and DC signal layer) for signal routing. The transition including the 18 mm long micro-strip feed lines provides a bandwidth of 20 GHz from 50 GHz to 70 GHz and an insertion loss of less than 2.3 dB. This makes the transition perfectly suited for differential wide-band transceivers operating in the 60 GHz band.", "title": "" }, { "docid": "767da6eef531b3dc54d6600e9d238ffa", "text": "This review paper focuses on the neonatal brain segmentation algorithms in the literature. It provides an overview of clinical magnetic resonance imaging (MRI) of the newborn brain and the challenges in automated tissue classification of neonatal brain MRI. It presents a complete survey of the existing segmentation methods and their salient features. The different approaches are categorized into intracranial and brain tissue segmentation algorithms based on their level of tissue classification. Further, the brain tissue segmentation techniques are grouped based on their atlas usage into atlas-based, augmented atlas-based and atlas-free methods. In addition, the research gaps and lacunae in literature are also identified.", "title": "" }, { "docid": "b89099e9b01a83368a1ebdb2f4394eba", "text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.", "title": "" }, { "docid": "1b78650b979b0043eeb3e7478a263846", "text": "Our solution was launched with the goal of functioning as a complete online digital library that gives access to a large catalog of PDF books. You may find many different types of e-books as well as other literature in our document database. Popular topics in our catalog include famous books, answer keys, assessment test questions and answers, guideline papers, training guides, quizzes, user guides, user manuals, service instructions, repair handbooks, and so forth.", "title": "" }, { "docid": "1a5c009f059ea28fd2d692d1de4eb913", "text": "We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains.
CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD parallelly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.", "title": "" }, { "docid": "69c8584255b16e6bc05fdfc6510d0dc4", "text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.", "title": "" }, { "docid": "865d7b8fae1cab739570229889177d58", "text": "This paper presents design and implementation of scalar control of induction motor. This method leads to be able to adjust the speed of the motor by control the frequency and amplitude of the stator voltage of induction motor, the ratio of stator voltage to frequency should be kept constant, which is called as V/F or scalar control of induction motor drive. This paper presents a comparative study of open loop and close loop V/F control induction motor. 
The V/F", "title": "" }, { "docid": "4f9df22aa072503e23384f62d4b5acdb", "text": "Convolutional neural networks are designed for dense data, but vision data is often sparse (stereo depth, point clouds, pen stroke, etc.). We present a method to handle sparse depth data with optional dense RGB, and accomplish depth completion and semantic segmentation changing only the last layer. Our proposal efficiently learns sparse features without the need of an additional validity mask. We show how to ensure network robustness to varying input sparsities. Our method even works with densities as low as 0.8% (8 layer lidar), and outperforms all published state-of-the-art on the Kitti depth completion benchmark.", "title": "" }, { "docid": "03cd6ef0cc0dab9f33b88dd7ae4227c2", "text": "The dopaminergic system plays a pivotal role in the central nervous system via its five diverse receptors (D1–D5). Dysfunction of dopaminergic system is implicated in many neuropsychological diseases, including attention deficit hyperactivity disorder (ADHD), a common mental disorder that prevalent in childhood. Understanding the relationship of five different dopamine (DA) receptors with ADHD will help us to elucidate different roles of these receptors and to develop therapeutic approaches of ADHD. This review summarized the ongoing research of DA receptor genes in ADHD pathogenesis and gathered the past published data with meta-analysis and revealed the high risk of DRD5, DRD2, and DRD4 polymorphisms in ADHD.", "title": "" }, { "docid": "3223d52743a64bc599488cdde8ef177b", "text": "The resolution of a comparator is determined by the dc input offset and the ac noise. For mixed-mode applications with significant digital switching, input-referred supply noise can be a significant source of error. This paper proposes an offset compensation technique that can simultaneously minimize input-referred supply noise. Demonstrated with digital offset compensation, this scheme reduces input-referred supply noise to a small fraction (13%) of one least significant bit (LSB) digital offset. In addition, the same analysis can be applied to analog offset compensation.", "title": "" }, { "docid": "97ac64bb4d06216253eacb17abfcb103", "text": "UIMA Ruta is a rule-based system designed for information extraction tasks, but it is also applicable for many natural language processing use cases. This demonstration gives an overview of the UIMA Ruta Workbench, which provides a development environment and tooling for the rule language. It was developed to ease every step in engineering rule-based applications. In addition to the full-featured rule editor, the user is supported by explanation of the rule execution, introspection in results, automatic validation and rule induction. Furthermore, the demonstration covers the usage and combination of arbitrary components for natural language processing.", "title": "" } ]
scidocsrr
6323ee41481aa633455b839b29dd1eea
A Binning Scheme for Fast Hard Drive Based Image Search
[ { "docid": "7eec1e737523dc3b78de135fc71b058f", "text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches", "title": "" }, { "docid": "3982c66e695fdefe36d8d143247add88", "text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "title": "" } ]
[ { "docid": "f67e221a12e0d8ebb531a1e7c80ff2ff", "text": "Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: 1) relying on object or part annotations which are heavily labor consuming; and 2) ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification and the main novelties are: 1) object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotion; and 2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Compared with more than ten state-of-the-art methods on four widely-used datasets, our OPAM approach achieves the best performance.", "title": "" }, { "docid": "e051c1dafe2a2f45c48a79c320894795", "text": "In this paper we present a graph-based model that, utilizing relations between groups of System-calls, detects whether an unknown software sample is malicious or benign, and classifies a malicious software to one of a set of known malware families. More precisely, we utilize the System-call Dependency Graphs (or, for short, ScD-graphs), obtained by traces captured through dynamic taint analysis. We design our model to be resistant against strong mutations applying our detection and classification techniques on a weighted directed graph, namely Group Relation Graph, or Gr-graph for short, resulting from ScD-graph after grouping disjoint subsets of its vertices. For the detection process, we propose the $$\\Delta $$ Δ -similarity metric, and for the process of classification, we propose the SaMe-similarity and NP-similarity metrics consisting the SaMe-NP similarity. Finally, we evaluate our model for malware detection and classification showing its potentials against malicious software measuring its detection rates and classification accuracy.", "title": "" }, { "docid": "fb70de7ed3e42c37b130686bfa3aee47", "text": "Data from vehicles instrumented with GPS or other localization technologies are increasingly becoming widely available due to the investments in Connected and Automated Vehicles (CAVs) and the prevalence of personal mobile devices such as smartphones. Tracking or trajectory data from these probe vehicles are already being used in practice for travel time or speed estimation and for monitoring network conditions. 
However, there has been limited work on extracting other critical traffic flow variables, in particular density and flow, from probe data. This paper presents a microscopic approach (akin to car-following) for inferring the number of unobserved vehicles in between a set of probe vehicles in the traffic stream. In particular, we develop algorithms to extract and exploit the somewhat regular patterns in the trajectories when the probe vehicles travel through stop-and-go waves in congested traffic. Using certain critical points of trajectories as the input, the number of unobserved vehicles between consecutive probes are then estimated through a Naïve Bayes model. The parameters needed for the Naïve Bayes include means and standard deviations for the probability density functions (pdfs) for the distance headways between vehicles. These parameters are estimated through supervised as well as unsupervised learning methods. The proposed ideas are tested based on the trajectory data collected from US 101 and I-80 in California for the FHWA's NGSIM (next generation simulation) project. Under the dense traffic conditions analyzed, the results show that the number of unobserved vehicles between two probes can be predicted with an accuracy of ±1 vehicle almost always.", "title": "" }, { "docid": "9fc6244b3d0301a8486d44d58cf95537", "text": "The aim of this paper is to explore some, ways of linking ethnographic studies of work in context with the design of CSCW systems. It uses examples from an interdisciplinary collaborative project on air traffic control. Ethnographic methods are introduced, and applied to identifying the social organization of this cooperative work, and the use of instruments within it. On this basis some metaphors for the electronic representation of current manual practices are presented, and their possibilities and limitations are discussed.", "title": "" }, { "docid": "b5967a8dc6a8349b2f5c1d3070369d3c", "text": "Hereditary xerocytosis is thought to be a rare genetic condition characterized by red blood cell (RBC) dehydration with mild hemolysis. RBC dehydration is linked to reduced Plasmodium infection in vitro; however, the role of RBC dehydration in protection against malaria in vivo is unknown. Most cases of hereditary xerocytosis are associated with gain-of-function mutations in PIEZO1, a mechanically activated ion channel. We engineered a mouse model of hereditary xerocytosis and show that Plasmodium infection fails to cause experimental cerebral malaria in these mice due to the action of Piezo1 in RBCs and in T cells. Remarkably, we identified a novel human gain-of-function PIEZO1 allele, E756del, present in a third of the African population. RBCs from individuals carrying this allele are dehydrated and display reduced Plasmodium infection in vitro. The existence of a gain-of-function PIEZO1 at such high frequencies is surprising and suggests an association with malaria resistance.", "title": "" }, { "docid": "f86b052520e3950a2b580323252dbfde", "text": "In this paper, novel radial basis function-neural network (RBF-NN) models are presented for the efficient filling of the coupling matrix of the method of moments (MoM). Two RBF-NNs are trained to calculate the majority of elements in the coupling matrix. The rest of elements are calculated using the conventional MoM, hence the technique is referred to as neural network-method of moments (NN-MoM). The proposed NN-MoM is applied to the analysis of a number of microstrip patch antenna arrays. 
The results show that NN-MoM is both accurate and fast. The proposed technique is general and it is convenient to integrate with MoM planar solvers.", "title": "" }, { "docid": "88ff3300dafab6b87d770549a1dc4f0e", "text": "Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. This paper elaborates on the problem of constrained novelty search and proposes two novelty search algorithms which search within both the feasible and the infeasible space. Inspired by the FI-2pop genetic algorithm, both algorithms maintain and evolve two separate populations, one with feasible and one with infeasible individuals, while each population can use its own selection method. The proposed algorithms are applied to the problem of generating diverse but playable game levels, which is representative of the larger problem of procedural game content generation. Results show that the two-population constrained novelty search methods can create, under certain conditions, larger and more diverse sets of feasible game levels than current methods of novelty search, whether constrained or unconstrained. However, the best algorithm is contingent on the particularities of the search space and the genetic operators used. Additionally, the proposed enhancement of offspring boosting is shown to enhance performance in all cases of two-population novelty search.", "title": "" }, { "docid": "15ce175cc7aa263ded19c0ef344d9a61", "text": "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-ofthe-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.", "title": "" }, { "docid": "2ed183563bd5cdaafa96b03836883730", "text": "This is an introduction to the Classic Paper on MOSFET scaling by R. Dennardet al., “Design of Ion-Implanted MOSFET’s with Very Small Physical Dimensions,” published in the IEEE Journal of Solid-State Circuitsin October 1974. The history of scaling and its application to very large scale integration (VLSI) MOSFET technology is traced from 1970 to 1998. The role of scaling in the profound improvements in power delay product over the last three decades is analyzed in basic terms.", "title": "" }, { "docid": "ec2d9c12a906eb999e7a178d0f672073", "text": "Passive-dynamic walkers are simple mechanical devices, composed of solid parts connected by joints, that walk stably down a slope. They have no motors or controllers, yet can have remarkably humanlike motions. 
This suggests that these machines are useful models of human locomotion; however, they cannot walk on level ground. Here we present three robots based on passive-dynamics, with small active power sources substituted for gravity, which can walk on level ground. These robots use less control and less energy than other powered robots, yet walk more naturally, further suggesting the importance of passive-dynamics in human locomotion.", "title": "" }, { "docid": "0ea239ac71e65397d0713fe8c340f67c", "text": "Mutations in leucine-rich repeat kinase 2 (LRRK2) are a common cause of familial and sporadic Parkinson's disease (PD). Elevated LRRK2 kinase activity and neurodegeneration are linked, but the phosphosubstrate that connects LRRK2 kinase activity to neurodegeneration is not known. Here, we show that ribosomal protein s15 is a key pathogenic LRRK2 substrate in Drosophila and human neuron PD models. Phosphodeficient s15 carrying a threonine 136 to alanine substitution rescues dopamine neuron degeneration and age-related locomotor deficits in G2019S LRRK2 transgenic Drosophila and substantially reduces G2019S LRRK2-mediated neurite loss and cell death in human dopamine and cortical neurons. Remarkably, pathogenic LRRK2 stimulates both cap-dependent and cap-independent mRNA translation and induces a bulk increase in protein synthesis in Drosophila, which can be prevented by phosphodeficient T136A s15. These results reveal a novel mechanism of PD pathogenesis linked to elevated LRRK2 kinase activity and aberrant protein synthesis in vivo.", "title": "" }, { "docid": "4ae0bb75493e5d430037ba03fcff4054", "text": "David Moher is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, and the Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Alessandro Liberati is at the Università di Modena e Reggio Emilia, Modena, and the Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy. Jennifer Tetzlaff is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario. Douglas G Altman is at the Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom. Membership of the PRISMA Group is provided in the Acknowledgements.", "title": "" }, { "docid": "f1f72a6d5d2ab8862b514983ac63480b", "text": "Grids are commonly used as histograms to process spatial data in order to detect frequent patterns, predict destinations, or to infer popular places. However, they have not been previously used for GPS trajectory similarity searches or retrieval in general. Instead, slower and more complicated algorithms based on individual point-pair comparison have been used. We demonstrate how a grid representation can be used to compute four different route measures: novelty, noteworthiness, similarity, and inclusion. The measures may be used in several applications such as identifying taxi fraud, automatically updating GPS navigation software, optimizing traffic, and identifying commuting patterns. We compare our proposed route similarity measure, C-SIM, to eight popular alternatives including Edit Distance on Real sequence (EDR) and Frechet distance. The proposed measure is simple to implement and we give a fast, linear time algorithm for the task. It works well under noise, changes in sampling rate, and point shifting. We demonstrate that by using the grid, a route similarity ranking can be computed in real-time on the Mopsi20141 route dataset, which consists of over 6,000 routes. 
This ranking is an extension of the most similar route search and contains an ordered list of all similar routes from the database. The real-time search is due to indexing the cell database and comes at the cost of spending 80% more memory space for the index. The methods are implemented inside the Mopsi2 route module.", "title": "" }, { "docid": "ab589fb1d97849e95da05d7e9b1d0f4f", "text": "We introduce a new speaker independent method for reducing wind noise in single-channel recordings of noisy speech. The method is based on non-negative sparse coding and relies on a wind noise dictionary which is estimated from an isolated noise recording. We estimate the parameters of the model and discuss their sensitivity. We then compare the algorithm with the classical spectral subtraction method and the Qualcomm-ICSI-OGI noise reduction method. We optimize the sound quality in terms of signal-to-noise ratio and provide results on a noisy speech recognition task.", "title": "" }, { "docid": "18c8fcba57c295568942fa40b605c27e", "text": "The Internet of Things (IoT), an emerging global network of uniquely identifiable embedded computing devices within the existing Internet infrastructure, is transforming how we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. In addition to increased communication efficiency between connected objects, the IoT also brings new security and privacy challenges. Comprehensive measures that enable IoT device authentication and secure access control need to be established. Existing hardware, software, and network protection methods, however, are designed against fraction of real security issues and lack the capability to trace the provenance and history information of IoT devices. To mitigate this shortcoming, we propose an RFID-enabled solution that aims at protecting endpoint devices in IoT supply chain. We take advantage of the connection between RFID tag and control chip in an IoT device to enable data transfer from tag memory to centralized database for authentication once deployed. Finally, we evaluate the security of our proposed scheme against various attacks.", "title": "" }, { "docid": "990fb61d1135b05f88ae02eb71a6983f", "text": "Previous efforts in recommendation of candidates for talent search followed the general pattern of receiving an initial search criteria and generating a set of candidates utilizing a pre-trained model. Traditionally, the generated recommendations are final, that is, the list of potential candidates is not modified unless the user explicitly changes his/her search criteria. In this paper, we are proposing a candidate recommendation model which takes into account the immediate feedback of the user, and updates the candidate recommendations at each step. This setting also allows for very uninformative initial search queries, since we pinpoint the user's intent due to the feedback during the search session. To achieve our goal, we employ an intent clustering method based on topic modeling which separates the candidate space into meaningful, possibly overlapping, subsets (which we call intent clusters) for each position. On top of the candidate segments, we apply a multi-armed bandit approach to choose which intent cluster is more appropriate for the current session. We also present an online learning scheme which updates the intent clusters within the session, due to user feedback, to achieve further personalization. 
Our offline experiments as well as the results from the online deployment of our solution demonstrate the benefits of our proposed methodology.", "title": "" }, { "docid": "1d12470ab31735721a1f50ac48ac65bd", "text": "In this work, we investigate the role of relational bonds in keeping students engaged in online courses. Specifically, we quantify the manner in which students who demonstrate similar behavior patterns influence each other's commitment to the course through their interaction with them either explicitly or implicitly. To this end, we design five alternative operationalizations of relationship bonds, which together allow us to infer a scaled measure of relationship between pairs of students. Using this, we construct three variables, namely number of significant bonds, number of significant bonds with people who have dropped out in the previous week, and number of such bonds with people who have dropped in the current week. Using a survival analysis, we are able to measure the prediction strength of these variables with respect to dropout at each time point. Results indicate that higher numbers of significant bonds predict lower rates of dropout, while loss of significant bonds is associated with higher rates of dropout.", "title": "" }, { "docid": "9ceb26a83e77ac304272625a148c504e", "text": "This article presents the architecture of Junior, a robotic vehicle capable of navigating urban environments autonomously. In doing so, the vehicle is able to select its own routes, perceive and interact with other traffic, and execute various urban driving skills including lane changes, U-turns, parking, and merging into moving traffic. The vehicle successfully finished and won second place in the DARPA Urban Challenge, a robot competition organized by the U.S. Government.", "title": "" }, { "docid": "b4b9952da82739fc79ecf949ddcd8e05", "text": "Light field depth estimation is an essential part of many light field applications. Numerous algorithms have been developed using various light field characteristics. However, conventional methods fail when handling noisy scenes with occlusion. To remedy this problem, we present a light field depth estimation method which is more robust to occlusion and less sensitive to noise. Novel data costs using angular entropy metric and adaptive defocus response are introduced. Integration of both data costs improves the occlusion and noise invariant capability significantly. Cost volume filtering and graph cut optimization are utilized to improve the accuracy of the depth map. Experimental results confirm that the proposed method is robust and achieves high quality depth maps in various scenes. The proposed method outperforms the state-of-the-art light field depth estimation methods in qualitative and quantitative evaluation.", "title": "" }, { "docid": "afe36d039098b94a77ea58fa56bd895d", "text": "We present a framework to automatically detect and remove shadows in real world scenes from a single image. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The features are learned at the super-pixel level and along the dominant boundaries in the image. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow masks.
Using the detected shadow masks, we propose a Bayesian formulation to accurately extract shadow matte and subsequently remove shadows. The Bayesian formulation is based on a novel model which accurately models the shadow generation process in the umbra and penumbra regions. The model parameters are efficiently estimated using an iterative optimization procedure. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.", "title": "" } ]
scidocsrr
71f8b40eb5d01db0acf0389d63853383
On Lower Bounds on the Size of Designs in Compact Symmetric Spaces of Rank 1
[ { "docid": "c69e805751421b516e084498e7fc6f44", "text": "We investigate two extremal problems for polynomials giving upper bounds for spherical codes and for polynomials giving lower bounds for spherical designs, respectively. We consider two basic properties of the solutions of these problems. Namely, we estimate from below the number of double zeros and find zero Gegenbauer coefficients of extremal polynomials. Our results allow us to search effectively for such solutions using a computer. The best polynomials we have obtained give substantial improvements in some cases on the previously known bounds for spherical codes and designs. Some examples are given in Section 6.", "title": "" } ]
[ { "docid": "0cd96187b257ee09060768650432fe6d", "text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.", "title": "" }, { "docid": "9786926819f01ef759ff23b0bec695d1", "text": "Despite the widespread acceptance of strategy's role in mediating an organization's interaction with its environment (Andrews, 1971; Ansoff, 1965; Chandler, 1962; Child, 1972; Miles & Snow, 1978), the scope of research on strategy \"implementation\" has remained quite narrow. Following Chandler (1962), the concern has been predominantly with how a firm's organizational structure and control system are, or might be, related to the degree and nature of its product and geographic diversification (Fouraker & Stopford, 1968; Grinyer, Al-Bazzaz, & Yasai-Ardekani, 1980; Rumelt, 1974; Scott, 1973; Vancil, 1980). However^ strategy formulation and implementation take place not just at the level of the diversified firm as a whole, but also at the level of the divisions/strategic business units (SBUs) comprising the firm (Hambrick, 1980; Hofer & Schendel, 1978). In such a context, the near absence of empirical studies on strategy implementation at the SBU level presents a significant research opportunity.", "title": "" }, { "docid": "cc17b3548d2224b15090ead8c398f808", "text": "Malaria is a global health problem that threatens 300–500 million people and kills more than one million people annually. Disease control is hampered by the occurrence of multi-drug-resistant strains of the malaria parasite Plasmodium falciparum. Synthetic antimalarial drugs and malarial vaccines are currently being developed, but their efficacy against malaria awaits rigorous clinical testing. Artemisinin, a sesquiterpene lactone endoperoxide extracted from Artemisia annua L (family Asteraceae; commonly known as sweet wormwood), is highly effective against multi-drug-resistant Plasmodium spp., but is in short supply and unaffordable to most malaria sufferers. Although total synthesis of artemisinin is difficult and costly, the semi-synthesis of artemisinin or any derivative from microbially sourced artemisinic acid, its immediate precursor, could be a cost-effective, environmentally friendly, high-quality and reliable source of artemisinin. Here we report the engineering of Saccharomyces cerevisiae to produce high titres (up to 100 mg l-1) of artemisinic acid using an engineered mevalonate pathway, amorphadiene synthase, and a novel cytochrome P450 monooxygenase (CYP71AV1) from A. annua that performs a three-step oxidation of amorpha-4,11-diene to artemisinic acid. 
The synthesized artemisinic acid is transported out and retained on the outside of the engineered yeast, meaning that a simple and inexpensive purification process can be used to obtain the desired product. Although the engineered yeast is already capable of producing artemisinic acid at a significantly higher specific productivity than A. annua, yield optimization and industrial scale-up will be required to raise artemisinic acid production to a level high enough to reduce artemisinin combination therapies to significantly below their current prices.", "title": "" }, { "docid": "68470cd075d9c475b5ff93578ff7e86d", "text": "Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is being able to recognize feelings in the conversation partner and reply accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficult by the paucity of large-scale publicly available datasets both for emotion and relevant dialogues. This work proposes a new task for empathetic dialogue generation and EMPATHETICDIALOGUES, a dataset of 25k conversations grounded in emotional contexts to facilitate training and evaluating dialogue systems. Our experiments indicate that models explicitly leveraging emotion predictions from previous utterances are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g. perceived relevance of responses, BLEU scores).", "title": "" }, { "docid": "54c9b268dec2ac3006a153f5a43c5d2a", "text": "Data mining provides a way of finding hidden and useful knowledge in large amounts of data. Usually we find information by examining normal trends or distributions of data, but sometimes a rare event or data object may provide information that is very interesting to us. Outlier detection is one of the tasks of data mining: it finds abnormal data points or sequences hidden in the dataset. A data stream is an unbounded sequence of data with explicit or implicit temporal context, and it is uncertain and dynamic in nature. In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. Outlier detection refers to the task of identifying patterns that do not conform to established normal behavior. Anomaly detection in high-dimensional data presents particular difficulties arising from the \"curse of dimensionality\". The prevailing view is that distance concentration, that is, the tendency of distances in high-dimensional data to become indistinguishable, makes distance-based methods label all points as equally good outliers. This paper provides evidence that distance-based methods can still produce contrasting outliers in high-dimensional settings, and that high dimensionality can have a different effect when the notion of reverse nearest neighbors is re-evaluated. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms.
This has fuelled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used.", "title": "" }, { "docid": "e0f5f73eb496b77cddc5820fb6306f4b", "text": "Safe handling of dynamic highway and inner city scenarios with autonomous vehicles involves the problem of generating traffic-adapted trajectories. In order to account for the practical requirements of the holistic autonomous system, we propose a semi-reactive trajectory generation method, which can be tightly integrated into the behavioral layer. The method realizes long-term objectives such as velocity keeping, merging, following, stopping, in combination with a reactive collision avoidance by means of optimal-control strategies within the Frenét-Frame [12] of the street. The capabilities of this approach are demonstrated in the simulation of a typical high-speed highway scenario.", "title": "" }, { "docid": "d3bdff7b747b5804971534cfbfd2ce53", "text": "The consequences of security problems are increasingly serious. These problems can now lead to personal injury, prolonged downtime and irreparable damage to capital goods. To achieve this, systems require end-to-end security solutions that cover the layers of connectivity, furthermore, guarantee the privatization and protection of data circulated via networks. In this paper, we will give a definition to the Internet of things, try to dissect its architecture (protocols, layers, entities …), thus giving a state of the art of security in the field of internet of things (Faults detected in each layer …), finally, mention the solutions proposed until now to help researchers start their researches on internet of things security subject.", "title": "" }, { "docid": "caaab1ca0175a6387b1a0c7be7803513", "text": "Probably the most promising breakthroughs in vehicular safety will emerge from intelligent, Advanced Driving Assistance Systems (i-ADAS). Influential research institutions and large vehicle manufacturers work in lockstep to create advanced, on-board safety systems by means of integrating the functionality of existing systems and developing innovative sensing technologies. In this contribution, we describe a portable and scalable vehicular instrumentation designed for on-road experimentation and hypothesis verification in the context of designing i-ADAS prototypes.", "title": "" }, { "docid": "c592f46ffd8286660b9e233127cefea7", "text": "According to literature, penetration pricing is the dominant pricing strategy for network effect markets. In this paper we show that diffusion of products in a network effect market does not only vary with the set of pricing strategies chosen by competing vendors but also strongly depends on the topological structure of the customers' network. This stresses the inappropriateness of classical \"installed base\" models (abstracting from this structure). 
Our simulations show that although competitive prices tend to be significantly higher in close topology markets, they lead to lower total profit and lower concentration of vendors' profit in these markets.", "title": "" }, { "docid": "8bc17e181e2ca063c44c3ab5dc627993", "text": "Unlike traditional over-the-phone spoken dialog systems (SDSs), modern dialog systems tend to have visual rendering on the device screen as an additional modality to communicate the system’s response to the user. Visual display of the system’s response not only changes human behavior when interacting with devices, but also creates new research areas in SDSs. Onscreen item identification and resolution in utterances is one critical problem to achieve a natural and accurate humanmachine communication. We pose the problem as a classification task to correctly identify intended on-screen item(s) from user utterances. Using syntactic, semantic as well as context features from the display screen, our model can resolve different types of referring expressions with up to 90% accuracy. In the experiments we also show that the proposed model is robust to domain and screen layout changes.", "title": "" }, { "docid": "4d52f2c0ec2f5f96f2676dfc012bc2d8", "text": "We have expanded the field of \"DNA computers\" to RNA and present a general approach for the solution of satisfiability problems. As an example, we consider a variant of the \"Knight problem,\" which asks generally what configurations of knights can one place on an n x n chess board such that no knight is attacking any other knight on the board. Using specific ribonuclease digestion to manipulate strands of a 10-bit binary RNA library, we developed a molecular algorithm and applied it to a 3 x 3 chessboard as a 9-bit instance of this problem. Here, the nine spaces on the board correspond to nine \"bits\" or placeholders in a combinatorial RNA library. We recovered a set of \"winning\" molecules that describe solutions to this problem.", "title": "" }, { "docid": "1f3a41fc5202d636fcfe920603df57e4", "text": "We present data on corporal punishment (CP) by a nationally representative sample of 991 American parents interviewed in 1995. Six types of CP were examined: slaps on the hand or leg, spanking on the buttocks, pinching, shaking, hitting on the buttocks with a belt or paddle, and slapping in the face. The overall prevalence rate (the percentage of parents using any of these types of CP during the previous year) was 35% for infants and reached a peak of 94% at ages 3 and 4. Despite rapid decline after age 5, just over half of American parents hit children at age 12, a third at age 14, and 13% at age 17. Analysis of chronicity found that parents who hit teenage children did so an average of about six times during the year. Severity, as measured by hitting the child with a belt or paddle, was greatest for children age 5-12 (28% of such children). CP was more prevalent among African American and low socioeconomic status parents, in the South, for boys, and by mothers. 
The pervasiveness of CP reported in this article, and the harmful side effects of CP shown by recent longitudinal research, indicates a need for psychology and sociology textbooks to reverse the current tendency to almost ignore CP and instead treat it as a major aspect of the socialization experience of American children; and for developmental psychologists to be cognizant of the likelihood that parents are using CP far more often than even advocates of CP recommend, and to inform parents about the risks involved.", "title": "" }, { "docid": "062149cd37d1e9f04f32bd6b713f10ab", "text": "Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an ``inverse model,'' a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website (https://github.com/ToniCreswell/InvertingGAN).", "title": "" }, { "docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09", "text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.", "title": "" }, { "docid": "6cddde477f66fd4511da84f4219f058d", "text": "Variational Autoencoder (VAE) has achieved promising success since its emergence. In recent years, its various variants have been developed, especially those works which extend VAE to handle sequential data [1, 2, 5, 7]. However, these works either do not generate sequential latent variables, or encode latent variables only based on inputs from earlier time-steps. 
We believe that in real-world situations, encoding latent variables at a specific time-step should be based on not only previous observations, but also succeeding samples. In this work, we emphasize such fact and theoretically derive the bidirectional Long Short-Term Memory Variational Autoencoder (bLSTM-VAE), a novel variant of VAE whose encoders and decoders are implemented by bidirectional Long Short-Term Memory (bLSTM) networks. The proposed bLSTM-VAE can encode sequential inputs as an equal-length sequence of latent variables. A latent variable at a specific time-step is encoded by simultaneously processing observations from the first time-step till current time-step in a forward order and observations from current time-step till the last timestep in a backward order. As a result, we consider that the proposed bLSTM-VAE could learn latent variables reliably by mining the contextual information from the whole input sequence. In order to validate the proposed method, we apply it for gesture recognition using 3D skeletal joint data. The evaluation is conducted on the ChaLearn Look at People gesture dataset and NTU RGB+D dataset. The experimental results show that combining with the proposed bLSTM-VAE, the classification network performs better than when combining with a standard VAE, and also outperforms several state-of-the-art methods.", "title": "" }, { "docid": "2fec6840021460bc629572f4fae6fc35", "text": "It is anticipated that SDN coupled with NFV and cloud computing, will become a critical enabling technology to radically revolutionize the way network operators will architect and monetize their infrastructure. On the other hand, the Internet of Things (IoT) is transforming the interaction between cyberspace and the physical space with a tremendous impact on everyday life. The effectiveness of these technologies will require new methodological and engineering approaches due to the impressive scale of the problem and the new challenging requests in terms of performance, security and reliability. This paper presents a simple and general SDN-IoT architecture with NFV implementation with specific choices on where and how to adopt SDN and NFV approaches to address the new challenges of the Internet of Things. The architecture will accelerate innovations in the IoT sector, thanks to its flexibility opening new perspectives for fast deployment of software-enabled worldwide services. The paper also look at the business perspective by considering SDN and NFV as enablers of new added value services on top to the existing infrastructure providing more opportunities for revenues leveraging fast deployed services in the value chain.", "title": "" }, { "docid": "54def135e495c572d3a9de61492681a3", "text": "Event logs or log files form an essential part of any network management and administration setup. While log files are invaluable to a network administrator, the vast amount of data they sometimes contain can be overwhelming and can sometimes hinder rather than facilitate the tasks of a network administrator. For this reason several event clustering algorithms for log files have been proposed, one of which is the event clustering algorithm proposed by Risto Vaarandi, on which his simple log file clustering tool (SLCT) is based. The aim of this work is to develop a visualization tool that can be used to view log files based on the clusters produced by SLCT. 
The proposed visualization tool, which is called LogView, utilizes treemaps to visualize the hierarchical structure of the clusters produced by SLCT. Our results based on different application log files show that LogView can ease the summarization of vast amount of data contained in the log files. This in turn can help to speed up the analysis of event data in order to detect any security issues on a given application.", "title": "" }, { "docid": "b01cd9a7135dfa82bdcb14bcc52c8e43", "text": "Path queries on a knowledge graph can be used to answer compositional questions such as “What languages are spoken by people living in Lisbon?”. However, knowledge graphs often have missing facts (edges) which disrupts path queries. Recent models for knowledge base completion impute missing facts by embedding knowledge graphs in vector spaces. We show that these models can be recursively applied to answer path queries, but that they suffer from cascading errors. This motivates a new “compositional” training objective, which dramatically improves all models’ ability to answer path queries, in some cases more than doubling accuracy. On a standard knowledge base completion task, we also demonstrate that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results.", "title": "" }, { "docid": "938aecbc66963114bf8753d94f7f58ed", "text": "OBJECTIVE\nTo observe the clinical effect of bee-sting (venom) therapy in the treatment of rheumatoid arthritis (RA).\n\n\nMETHODS\nOne hundred RA patients were randomly divided into medication (control) group and bee-venom group, with 50 cases in each. Patients of control group were treated with oral administration of Methotrexate (MTX, 7.5 mg/w), Sulfasalazine (0.5 g,t. i.d.), Meloxicam (Mobic,7. 5 mg, b. i. d.); and those of bee-venom group treated with Bee-sting of Ashi-points and the above-mentioned Western medicines. Ashi-points were selected according to the position of RA and used as the main acupoints, supplemented with other acupoints according to syndrome differentiation. The treatment was given once every other day and all the treatments lasted for 3 months.\n\n\nRESULTS\nCompared with pre-treatment, scores of joint swelling degree, joint activity, pain, and pressing pain, joint-swelling number, grasp force, 15 m-walking duration, morning stiff duration in bee-venom group and medication group were improved significantly (P<0.05, 0.01). Comparison between two groups showed that after the therapy, scores of joint swelling, pain and pressing pain, joint-swelling number and morning stiff duration, and the doses of the administered MTX and Mobic in bee-venom group were all significantly lower than those in medication group (P<0.05, 0.01); whereas the grasp force in been-venom group was markedly higher than that in medication group (P<0.05). In addition, the relapse rate of bee-venom group was obviously lower than that of medication group (P<0.05; 12% vs 32%).\n\n\nCONCLUSION\nCombined application of bee-venom therapy and medication is superior to simple use of medication in relieving RA, and when bee-sting therapy used, the commonly-taken doses of western medicines may be reduced, and the relapse rate gets lower.", "title": "" }, { "docid": "bc1366e6dec1b7d14a023166063da4ab", "text": "So far, realistic models of interiors have always been designed manually with the help of dedicated software packages. 
However, the demand for indoor models for different purposes has recently increased, thus a higher degree of automation could better satisfy different applications and speed up the processes. We present a technique for the fully automated modelling of indoor environments from a three dimensional point cloud. The results we achieve are very promising and the method suggested may provide completion to the actual standard for 3D city modelling. Our approach is based on a plane sweep algorithm for the segmentation of a point cloud in order to recognize the planar structures of a room. At first the 3D points that belong to the horizontal structures are tagged by sweeping a virtual plane along the vertical direction and thresholding the distances of each point to the plane. All the points that are not chosen as either floor or ceiling are labelled as potential wall points and are being considered in the following segmentation step to detect the vertical faces. Finally, the floor plan of the room is estimated by intersecting the directions of the walls and finding the vertices that constitute the ground shape. The result generated is a 3D model in CAD format, which perfectly fits the original point cloud.", "title": "" } ]
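The indoor-reconstruction passage that closes the list above describes a plane-sweep segmentation: a virtual horizontal plane is swept along the vertical direction, points within a distance threshold of it are tagged as floor or ceiling, and everything else becomes a potential wall point. Below is a minimal numerical sketch of that sweep; the step size, the distance threshold, and the choice of simply taking the best-supported height in the lower and upper halves of the scan are my own assumptions, not parameters taken from the paper.

```python
import numpy as np

def segment_horizontal_structures(points, step=0.05, dist_thresh=0.03):
    """Sweep a virtual horizontal plane along z and tag floor/ceiling points.

    points      : (N, 3) array of x, y, z coordinates spanning a whole room.
    step        : vertical step of the sweep plane, in metres (assumed value).
    dist_thresh : max point-to-plane distance for a point to count as lying
                  on the swept plane (assumed value).
    Returns (floor_z, ceiling_z, labels) with labels 0 = floor, 1 = ceiling,
    2 = potential wall point, mirroring the segmentation in the passage.
    """
    z = points[:, 2]
    levels = np.arange(z.min(), z.max() + step, step)        # sweep positions
    support = np.array([(np.abs(z - h) < dist_thresh).sum() for h in levels])

    # Densest level in the lower half is taken as the floor, densest in the
    # upper half as the ceiling (assumes the scan covers both surfaces).
    mid = 0.5 * (z.min() + z.max())
    lower, upper = levels < mid, levels >= mid
    floor_z = levels[lower][np.argmax(support[lower])]
    ceiling_z = levels[upper][np.argmax(support[upper])]

    labels = np.full(len(points), 2)                          # default: wall candidate
    labels[np.abs(z - floor_z) < dist_thresh] = 0
    labels[np.abs(z - ceiling_z) < dist_thresh] = 1
    return floor_z, ceiling_z, labels
```

Wall extraction would then repeat the same idea on the remaining label-2 points along candidate horizontal directions before intersecting the detected wall lines into a floor plan, as the passage outlines.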
scidocsrr
c28b2ace3b64b6a709de7a3e3d48af26
McDRAM: Low Latency and Energy-Efficient Matrix Computations in DRAM
[ { "docid": "d716725f2a5d28667a0746b31669bbb7", "text": "This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions taking them off the critical path while avoiding control divergence in the data parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24× to 1.55× and by 1.37× on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49%, it improves overall EDP (Energy Delay Product) and ED2P (Energy Delay Squared Product) on average by 1.47× and 2.01×, respectively. The average performance improvements increase to 1.52× without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.", "title": "" }, { "docid": "59ba2709e4f3653dcbd3a4c0126ceae1", "text": "Processing-in-memory (PIM) is a promising solution to address the \"memory wall\" challenges for future computer systems. Prior proposed PIM architectures put additional computation logic in or near memory. The emerging metal-oxide resistive random access memory (ReRAM) has showed its potential to be used for main memory. Moreover, with its crossbar array structure, ReRAM can perform matrix-vector multiplication efficiently, and has been widely studied to accelerate neural network (NN) applications. In this work, we propose a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM based main memory. In PRIME, a portion of ReRAM crossbar arrays can be configured as accelerators for NN applications or as normal memory for a larger memory space. We provide microarchitecture and circuit designs to enable the morphable functions with an insignificant area overhead. We also design a software/hardware interface for software developers to implement various NNs on PRIME. Benefiting from both the PIM architecture and the efficiency of using ReRAM for NN computation, PRIME distinguishes itself from prior work on NN acceleration, with significant performance improvement and energy saving. Our experimental results show that, compared with a state-of-the-art neural processing unit design, PRIME improves the performance by ~2360× and the energy consumption by ~895×, across the evaluated machine learning benchmarks.", "title": "" }, { "docid": "f53d8be1ec89cb8a323388496d45dcd0", "text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. 
A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.", "title": "" } ]
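The first positive passage above (Cnvlutin) rests on the observation that a multiply-accumulate whose activation operand is zero cannot change the result and can therefore be skipped. The toy loop below is only a software illustration of that accounting, not a model of the accelerator's hierarchical lanes or its co-designed storage format.

```python
import numpy as np

def dot_skipping_zeros(activations, weights):
    """Dot product that skips multiplications whose activation operand is zero.

    Returns the result plus the fraction of multiply-accumulates actually
    performed; with ReLU-style activations this fraction is often well below
    one, which is the opportunity the passage describes exploiting in hardware.
    """
    performed = 0
    acc = 0.0
    for a, w in zip(activations, weights):
        if a == 0.0:            # ineffectual: the result is unchanged, skip it
            continue
        acc += a * w
        performed += 1
    return acc, performed / max(len(weights), 1)

# Example: ReLU output is typically sparse, so many MACs disappear.
acts = np.maximum(np.random.randn(1024), 0.0)
wts = np.random.randn(1024)
value, kept = dot_skipping_zeros(acts, wts)
print(f"kept {kept:.0%} of the multiply-accumulates")
```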
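The second positive passage (PRIME) builds on the fact that a ReRAM crossbar evaluates a matrix-vector product in place: row voltages encode the input, cell conductances encode the weights, and the column currents sum the products. The function below only mimics that behaviour numerically; the uniform quantisation to a fixed number of conductance levels is an assumed stand-in for limited ReRAM precision, not the encoding used in the paper.

```python
import numpy as np

def crossbar_mvm(weights, x, levels=16):
    """Numerical stand-in for an analog ReRAM crossbar multiply.

    weights : (rows, cols) matrix mapped onto cell conductances.
    x       : input vector applied as row voltages.
    levels  : number of representable conductance levels (assumed value).
    Column current j = sum_i voltage_i * conductance_(i, j), i.e. (W^T x)_j.
    """
    w_max = float(np.abs(weights).max()) or 1.0
    step = 2 * w_max / (levels - 1)
    # Quantise weights to the discrete conductance levels of the cells.
    g = np.round(weights / step) * step
    return x @ g            # the column-wise summation Kirchhoff's law provides

W = np.random.randn(128, 64)
x = np.random.randn(128)
approx = crossbar_mvm(W, x)
exact = x @ W
print("max error from quantisation:", np.abs(approx - exact).max())
```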
[ { "docid": "0ab46230770ad5977608ebb3257c0cc1", "text": "In this letter, we present a system capable of inferring intent from observed vehicles traversing an unsignalized intersection, a task critical for the safe driving of autonomous vehicles, and beneficial for advanced driver assistance systems. We present a prediction method based on recurrent neural networks that takes data from a Lidar-based tracking system similar to those expected in future smart vehicles. The model is validated on a roundabout, a popular style of unsignalized intersection in urban areas. We also present a very large naturalistic dataset recorded in a typical intersection during two days of operation. This comprehensive dataset is used to demonstrate the performance of the algorithm introduced in this letter. The system produces excellent results, giving a significant 1.3-s prediction window before any potential conflict occurs.", "title": "" }, { "docid": "285d1b4d5a38ecb2e6eb45fbebfa0d0e", "text": "As machine learning (ML) systems become democratized, it becomes increasingly important to help users easily debug their models. However, current data tools are still primitive when it comes to helping users trace model performance problems all the way to the data. We focus on the particular problem of slicing data to identify subsets of the validation data where the model performs poorly. This is an important problem in model validation because the overall model performance can fail to reflect that of the smaller subsets, and slicing allows users to analyze the model performance on a more granularlevel. Unlike general techniques (e.g., clustering) that can find arbitrary slices, our goal is to find interpretable slices (which are easier to take action compared to arbitrary subsets) that are problematic and large. We propose Slice Finder, which is an interactive framework for identifying such slices using statistical techniques. Applications include diagnosing model fairness and fraud detection, where identifying slices that are interpretable to humans is crucial.", "title": "" }, { "docid": "0780f9240aaaa6b45cf4edf1d0de15ec", "text": "Adaptive Case Management (ACM) is a new paradigm that facilitates the coordination of knowledge work through case handling. Current ACM systems, however, lack support of providing sophisticated user guidance for next step recommendations and predictions about the case future. In recent years, process mining research developed approaches to make recommendations and predictions based on event logs readily available in process-aware information systems. This paper builds upon those approaches and integrates them into an existing ACM solution. The research goal is to design and develop a prototype that gives next step recommendations and predictions based on process mining techniques in ACM systems. The models proposed, recommend actions that shorten the case running time, mitigate deadline transgressions, support case goals and have been used in former cases with similar properties. They further give case predictions about the remaining time, possible deadline violations, and whether the current case path supports given case goals. A final evaluation proves that the prototype is indeed capable of making proper recommendations and predictions. In addition, starting points for further improvement are discussed.", "title": "" }, { "docid": "571c7cb6e0670539a3effbdd65858d2a", "text": "When writing software, developers often employ abbreviations in identifier names. 
In fact, some abbreviations may never occur with the expanded word, or occur more often in the code. However, most existing program comprehension and search tools do little to address the problem of abbreviations, and therefore may miss meaningful pieces of code or relationships between software artifacts. In this paper, we present an automated approach to mining abbreviation expansions from source code to enhance software maintenance tools that utilize natural language information. Our scoped approach uses contextual information at the method, program, and general software level to automatically select the most appropriate expansion for a given abbreviation. We evaluated our approach on a set of 250 potential abbreviations and found that our scoped approach provides a 57% improvement in accuracy over the current state of the art.", "title": "" }, { "docid": "ec0f7117acc67ae85b381b1d5f2dc5fa", "text": "We propose a generalized focal loss function based on the Tversky index to address the issue of data imbalance in medical image segmentation. Compared to the commonly used Dice loss, our loss function achieves a better trade off between precision and recall when training on small structures such as lesions. To evaluate our loss function, we improve the attention U-Net model by incorporating an image pyramid to preserve contextual features. We experiment on the BUS 2017 dataset and ISIC 2018 dataset where lesions occupy 4.84% and 21.4% of the images area and improve segmentation accuracy when compared to the standard U-Net by 25.7% and 3.6%, respectively.", "title": "" }, { "docid": "fb173d15e079fcdf0cc222f558713f9c", "text": "Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoderdecoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼ 30%) improvement over the current state-of-the-art.", "title": "" }, { "docid": "51be236c79d1af7a2aff62a8049fba34", "text": "BACKGROUND\nAs the number of children diagnosed with autism continues to rise, resources must be available to support parents of children with autism and their families. Parents need help as they assess their unique situations, reach out for help in their communities, and work to decrease their stress levels by using appropriate coping strategies that will benefit their entire family.\n\n\nMETHODS\nA descriptive, correlational, cross-sectional study was conducted with 75 parents/primary caregivers of children with autism. Using the McCubbin and Patterson model of family behavior, adaptive behaviors of children with autism, family support networks, parenting stress, and parent coping were measured.\n\n\nFINDINGS AND CONCLUSIONS\nAn association between low adaptive functioning in children with autism and increased parenting stress creates a need for additional family support as parents search for different coping strategies to assist the family with ongoing and new challenges. 
Professionals should have up-to-date knowledge of the supports available to families and refer families to appropriate resources to avoid overwhelming them with unnecessary and inappropriate referrals.", "title": "" }, { "docid": "136481c06ef00d0bd5bb7f45a8655c35", "text": "The spread of aggressive tweets, status and comments on social network are increasing gradually. People are using social media networks as a virtual platform to troll, objurgate, blaspheme and revile one another. These activities are spreading animosity in race-to-race, religion to religion etc. So, these comments should be identified and blocked on social networks. This work focuses on extracting comments from social networks and analyzes those comments whether they convey any blaspheme or revile in meaning. Comments are classified into three distinct classes; offensive, hate speech and neither. Document similarity analyses are done to identify the correlations among the documents. A well defined text pre-processing analysis is done to create an optimized word vector to train the classification model. Finally, the proposed model categorizes the comments into their respective classes with more than 93% accuracy.", "title": "" }, { "docid": "8047c0ba3b0a2838e7df95c8246863f4", "text": "Neurons in the ventral premotor cortex of the monkey encode the locations of visual, tactile, auditory and remembered stimuli. Some of these neurons encode the locations of stimuli with respect to the arm, and may be useful for guiding movements of the arm. Others encode the locations of stimuli with respect to the head, and may be useful for guiding movements of the head. We suggest that a general principle of sensory-motor integration is that the space surrounding the body is represented in body-part-centered coordinates. That is, there are multiple coordinate systems used to guide movement, each one attached to a different part of the body. This and other recent evidence from both monkeys and humans suggest that the formation of spatial maps in the brain and the guidance of limb and body movements do not proceed in separate stages but are closely integrated in both the parietal and frontal lobes.", "title": "" }, { "docid": "e8fb4848c8463bfcbe4a09dfeda52584", "text": "A highly efficient rectifier for wireless power transfer in biomedical implant applications is implemented using 0.18-m CMOS technology. The proposed rectifier with active nMOS and pMOS diodes employs a four-input common-gate-type capacitively cross-coupled latched comparator to control the reverse leakage current in order to maximize the power conversion efficiency (PCE) of the rectifier. The designed rectifier achieves a maximum measured PCE of 81.9% at 13.56 MHz under conditions of a low 1.5-Vpp RF input signal with a 1- k output load resistance and occupies 0.009 mm2 of core die area.", "title": "" }, { "docid": "dc34a320af0e7a104686a36f7a6101c3", "text": "In this paper, the proposed SIMO (Single input multiple outputs) DC-DC converter based on coupled inductor. The required controllable high DC voltage and intermediate DC voltage with high voltage gain from low input voltage sources, like renewable energy, can be achieved easily from the proposed converter. The high voltage DC bus can be used as the leading power for a DC load and intermediate voltage DC output terminals can charge supplementary power sources like battery modules. This converter operates simply with one power switch. 
It incorporates the techniques of voltage clamping (VC) and zero current switching (ZCS). The simulation result in PSIM software shows that the aims of high efficiency, high voltage gain, several output voltages with unlike levels, are achieved.", "title": "" }, { "docid": "a6e4a1912f2a0e58f97f4b5a5ab93dec", "text": "An adaptive fuzzy inference neural network (AFINN) is proposed in this paper. It has self-construction ability, parameter estimation ability and rule extraction ability. The structure of AFINN is formed by the following four phases: (1) initial rule creation, (2) selection of important input elements, (3) identification of the network structure and (4) parameter estimation using LMS (least-mean square) algorithm. When the number of input dimension is large, the conventional fuzzy systems often cannot handle the task correctly because the degree of each rule becomes too small. AFINN solves such a problem by modification of the learning and inference algorithm. 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "81b242e3c98eaa20e3be0a9777aa3455", "text": "Humor is an integral part of human lives. Despite being tremendously impactful, it is perhaps surprising that we do not have a detailed understanding of humor yet. As interactions between humans and AI systems increase, it is imperative that these systems are taught to understand subtleties of human expressions such as humor. In this work, we are interested in the question - what content in a scene causes it to be funny? As a first step towards understanding visual humor, we analyze the humor manifested in abstract scenes and design computational models for them. We collect two datasets of abstract scenes that facilitate the study of humor at both the scene-level and the object-level. We analyze the funny scenes and explore the different types of humor depicted in them via human studies. We model two tasks that we believe demonstrate an understanding of some aspects of visual humor. The tasks involve predicting the funniness of a scene and altering the funniness of a scene. We show that our models perform well quantitatively, and qualitatively through human studies. Our datasets are publicly available.", "title": "" }, { "docid": "997603462c825d4a3d61683adc2003c6", "text": "A new feeding technique for a broadband circularly polarized aperture-coupled patch antenna is proposed operating at X-band. The stacked microstrip antennas are used for broad bandwidth and flat gain. These broadband antennas fed by slot-coupled quadrature hybrid have dual-offset feedlines for low cross-polarization level. The quadrature hybrid has multi-section with multi-layers. The grounds of coupler and antenna are connected by via. The simulated 10 dB return loss bandwidth is 35.5% from 8.1 to 11.6 GHz and the 3 dB axial ratio (AR) bandwidth is 35%.", "title": "" }, { "docid": "32bf3e0ce6f9bc8864bd905ffebcfcce", "text": "BACKGROUND AND PURPOSE\nTo improve the accuracy of early postonset prediction of motor recovery in the flaccid hemiplegic arm, the effects of change in motor function over time on the accuracy of prediction were evaluated, and a prediction model for the probability of regaining dexterity at 6 months was developed.\n\n\nMETHODS\nIn 102 stroke patients, dexterity and paresis were measured with the Action Research Arm Test, Motricity Index, and Fugl-Meyer motor evaluation. For model development, 23 candidate determinants were selected. 
Logistic regression analysis was used for prognostic factors and model development.\n\n\nRESULTS\nAt 6 months, some dexterity in the paretic arm was found in 38%, and complete functional recovery was seen in 11.6% of the patients. Total anterior circulation infarcts, right hemisphere strokes, homonymous hemianopia, visual gaze deficit, visual inattention, and paresis were statistically significant related to a poor arm function. Motricity Index leg scores of at least 25 points in the first week and Fugl-Meyer arm scores of 11 points in the second week increasing to 19 points in the fourth week raised the probability of developing some dexterity (Action Research Arm Test >or=10 points) from 74% (positive predictive value [PPV], 0.74; 95% confidence interval [CI], 0.63 to 0.86) to 94% (PPV, 0.83; 95% CI, 0.76 to 0.91) at 6 months. No change in probabilities of prediction dexterity was found after 4 weeks.\n\n\nCONCLUSIONS\nBased on the Fugl-Meyer scores of the flaccid arm, optimal prediction of arm function outcome at 6 months can be made within 4 weeks after onset. Lack of voluntary motor control of the leg in the first week with no emergence of arm synergies at 4 weeks is associated with poor outcome at 6 months.", "title": "" }, { "docid": "e8cd97674866f4ef6aa33445a5cebea8", "text": "The ever increasing popularity of social networks and the ever easier photo taking and sharing experience have led to unprecedented concerns on privacy infringement. Inspired by the fact that the Robot Exclusion Protocol, which regulates web crawlers' behavior according a per-site deployed robots.txt, and cooperative practices of major search service providers, have contributed to a healthy web search industry, in this paper, we propose Privacy Expressing and Respecting Protocol (PERP) that consists of a Privacy.tag -- a physical tag that enables a user to explicitly and flexibly express their privacy deal, and Privacy Respecting Sharing Protocol (PRSP) -- a protocol that empowers the photo service provider to exert privacy protection following users' policy expressions, to mitigate the public's privacy concern, and ultimately create a healthy photo-sharing ecosystem in the long run. We further design an exemplar Privacy.Tag using customized yet compatible QR-code, and implement the Protocol and study the technical feasibility of our proposal. Our evaluation results confirm that PERP and PRSP are indeed feasible and incur negligible computation overhead.", "title": "" }, { "docid": "8b3557219674c8441e63e9b0ab459c29", "text": "his paper is focused on comparison of various decision tree classification algorithms using WEKA tool. Data mining tools such as classification, clustering, association and neural network solve large amount of problem. These are all open source tools, we directly communicate with each tool or by java code. In this paper we discuss on classification technique of data mining. In classification, various techniques are present such as bayes, functions, lazy, rules and tree etc. . Decision tree is one of the most frequently used classification algorithm. Decision tree classification with Waikato Environment for Knowledge Analysis (WEKA) is the simplest way to mining information from huge database. This work shows the process of WEKA analysis of file converts, step by step process of weka execution, selection of attributes to be mined and comparison with Knowledge Extraction of Evolutionary Learning . I took database [1] and execute in weka software. 
The conclusion of the paper shows the comparison among all type of decision tree algorithms by weka tool.", "title": "" }, { "docid": "836ac0267a67fd2e7657a5893975b023", "text": "Managing trust efficiently and effectively is critical to facilitating cooperation or collaboration and decision making tasks in tactical networks while meeting system goals such as reliability, availability, or scalability. Delay tolerant networks are often encountered in military network environments where end-to-end connectivity is not guaranteed due to frequent disconnection or delay. This work proposes a provenance-based trust framework for efficiency in resource consumption as well as effectiveness in trust evaluation. Provenance refers to the history of ownership of a valued object or information. We adopt the concept of provenance in that trustworthiness of an information provider affects that of information, and vice-versa. The proposed trust framework takes a data-driven approach to reduce resource consumption in the presence of selfish or malicious nodes. This work adopts a model-based method to evaluate the proposed trust framework using Stochastic Petri Nets. The results show that the proposed trust framework achieves desirable accuracy of trust evaluation of nodes compared with an existing scheme while consuming significantly less communication overhead.", "title": "" }, { "docid": "03966c28d31e1c45896eab46a1dcce57", "text": "For many applications it is useful to sample from a nite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M suuciently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be diicult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the algorithm itself. If the state space has a partial order that is preserved under the moves of the Markov chain, then the coupling is often particularly eecient. Using our approach one can sample from the Gibbs distributions associated with various statistical mechanics models (including Ising, random-cluster, ice, and dimer) or choose uniformly at random from the elements of a nite distributive lattice.", "title": "" }, { "docid": "107133e9b114526ac100714599305c20", "text": "While clinical text NLP systems have become very effective in recognizing named entities in clinical text and mapping them to standardized terminologies in the normalization process, there remains a gap in the ability of extractors to combine entities together into a complete semantic representation of medical concepts that contain multiple attributes each of which has its own set of allowed named entities or values. Furthermore, additional domain knowledge may be required to determine the semantics of particular tokens in the text that take on special meanings in relation to this concept. 
This research proposes an approach that provides ontological mappings of the surface forms of medical concepts that are of the UMLS semantic class signs/symptoms. The mappings are used to extract and encode the constituent set of named entities into interoperable semantic structures that can be linked to other structured and unstructured data for reuse in research and analysis.", "title": "" } ]
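Among the passages above, the one on exact sampling via couplings run "from a distant point in the past up until the present" describes Propp and Wilson's coupling from the past. The snippet below is a minimal monotone instance on a toy reflecting random walk; the walk itself, the time-doubling schedule, and the reuse of stored randomness are standard illustrative choices, not details given in that abstract, which targets models such as Ising, random-cluster, ice, and dimer.

```python
import random

def monotone_cftp(n=10, seed=0):
    """Coupling from the past for a lazy reflecting random walk on {0, ..., n}.

    The update rule is monotone, so tracking only the chains started from the
    minimal state 0 and the maximal state n suffices: when they coalesce by
    time 0, the shared value is an exact draw from the stationary distribution
    (uniform for this toy chain), with no burn-in guesswork.
    """
    rng = random.Random(seed)
    randomness = {}            # u_t for each time step, reused across restarts
    T = 1
    while True:
        lo, hi = 0, n
        for t in range(-T, 0):
            if t not in randomness:
                randomness[t] = rng.random()
            u = randomness[t]
            # Same random number drives both chains (the coupling).
            lo = max(lo - 1, 0) if u < 0.5 else min(lo + 1, n)
            hi = max(hi - 1, 0) if u < 0.5 else min(hi + 1, n)
        if lo == hi:
            return lo          # exact sample from the stationary distribution
        T *= 2                 # go further into the past and try again

print([monotone_cftp(seed=s) for s in range(5)])
```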
scidocsrr
fe654cb752b04fc399c6607f448f1551
Do They All Look the Same? Deciphering Chinese, Japanese and Koreans by Fine-Grained Deep Learning
[ { "docid": "48f784f6fe073c55efbc990b2a2257c6", "text": "Faces convey a wealth of social signals, including race, expression, identity, age and gender, all of which have attracted increasing attention from multi-disciplinary research, such as psychology, neuroscience, computer science, to name a few. Gleaned from recent advances in computer vision, computer graphics, and machine learning, computational intelligence based racial face analysis has been particularly popular due to its significant potential and broader impacts in extensive real-world applications, such as security and defense, surveillance, human computer interface (HCI), biometric-based identification, among others. These studies raise an important question: How implicit, non-declarative racial category can be conceptually modeled and quantitatively inferred from the face? Nevertheless, race classification is challenging due to its ambiguity and complexity depending on context and criteria. To address this challenge, recently, significant efforts have been reported toward race detection and categorization in the community. This survey provides a comprehensive and critical review of the state-of-the-art advances in face-race perception, principles, algorithms, and applications. We first discuss race perception problem formulation and motivation, while highlighting the conceptual potentials of racial face processing. Next, taxonomy of feature representational models, algorithms, performance and racial databases are presented with systematic discussions within the unified learning scenario. Finally, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potentially important cross-cutting themes and research directions for the issue of learning race from face.", "title": "" }, { "docid": "225204d66c371372debb3bb2a37c795b", "text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.", "title": "" } ]
[ { "docid": "d422afa99137d5e09bd47edeb770e872", "text": "OBJECTIVE\nFood Insecurity (FI) occurs in 21% of families with children and adolescents in the United States, but the potential developmental and behavioral implications of this prevalent social determinant of health have not been comprehensively elucidated. This systematic review aims to examine the association between FI and childhood developmental and behavioral outcomes in western industrialized countries.\n\n\nMETHOD\nThis review provides a critical summary of 23 peer reviewed articles from developed countries on the associations between FI and adverse childhood developmental behavioral outcomes including early cognitive development, academic performance, inattention, externalizing behaviors, and depression in 4 groups-infants and toddlers, preschoolers, school age, and adolescents. Various approaches to measuring food insecurity are delineated. Potential confounding and mediating variables of this association are compared across studies. Alternate explanatory mechanisms of observed effects and need for further research are discussed.\n\n\nRESULTS\nThis review demonstrates that household FI, even at marginal levels, is associated with children's behavioral, academic, and emotional problems from infancy to adolescence across western industrialized countries - even after controlling for confounders.\n\n\nCONCLUSIONS\nWhile the American Academy of Pediatrics already recommends routine screening for food insecurity during health maintenance visits, the evidence summarized here should encourage developmental behavioral health providers to screen for food insecurity in their practices and intervene when possible. Conversely, children whose families are identified as food insecure in primary care settings warrant enhanced developmental behavioral assessment and possible intervention.", "title": "" }, { "docid": "e53c7f8890d3bf49272e08d4446703a4", "text": "In orthogonal frequency-division multiplexing (OFDM) systems, it is generally assumed that the channel response is static in an OFDM symbol period. However, the assumption does not hold in high-mobility environments. As a result, intercarrier interference (ICI) is induced, and system performance is degraded. A simple remedy for this problem is the application of the zero-forcing (ZF) equalizer. Unfortunately, the direct ZF method requires the inversion of an N times N ICI matrix, where N is the number of subcarriers. When N is large, the computational complexity can become prohibitively high. In this paper, we first propose a low-complexity ZF method to solve the problem in single-input-single-output (SISO) OFDM systems. The main idea is to explore the special structure inherent in the ICI matrix and apply Newton's iteration for matrix inversion. With our formulation, fast Fourier transforms (FFTs) can be used in the iterative process, reducing the complexity from O (N3) to O (N log2 N). Another feature of the proposed algorithm is that it can converge very fast, typically in one or two iterations. We also analyze the convergence behavior of the proposed method and derive the theoretical output signal-to-interference-plus-noise ratio (SINR). For a multiple-input-multiple-output (MIMO) OFDM system, the complexity of the ZF method becomes more intractable. We then extend the method proposed for SISO-OFDM systems to MIMO-OFDM systems. It can be shown that the computational complexity can be reduced even more significantly. 
Simulations show that the proposed methods perform almost as well as the direct ZF method, while the required computational complexity is reduced dramatically.", "title": "" }, { "docid": "e787357f66066c09cf3a8920edef1244", "text": "The authors argue that a new six-dimensional framework for personality structure--the HEXACO model--constitutes a viable alternative to the well-known Big Five or five-factor model. The new model is consistent with the cross-culturally replicated finding of a common six-dimensional structure containing the factors Honesty-Humility (H), Emotionality (E), eExtraversion (X), Agreeableness (A), Conscientiousness (C), and Openness to Experience (O). Also, the HEXACO model predicts several personality phenomena that are not explained within the B5/FFM, including the relations of personality factors with theoretical biologists' constructs of reciprocal and kin altruism and the patterns of sex differences in personality traits. In addition, the HEXACO model accommodates several personality variables that are poorly assimilated within the B5/FFM.", "title": "" }, { "docid": "52f20c62f13274d473de5aa179ccf37b", "text": "The number of Internet auction shoppers is rapidly growing. However, online auction customers may suffer from auction fraud, sometimes without even noticing it. In-auction fraud differs from preand post-auction fraud in that it happens in the bidding period of an active auction. Since the in-auction fraud strategies are subtle and complex, it makes the fraudulent behavior more difficult to discover. Researchers from disciplines such as computer science and economics have proposed a number of methods to deal with in-auction fraud. In this paper, we summarize commonly seen indicators of in-auction fraud, provide a review of significant contributions in the literature of Internet in-auction fraud, and identify future challenging research tasks.", "title": "" }, { "docid": "ce29ddfd7b3d3a28ddcecb7a5bb3ac8e", "text": "Steganography consist of concealing secret information in a cover object to be sent over a public communication channel. It allows two parties to share hidden information in a way that no intruder can detect the presence of hidden information. This paper presents a novel steganography approach based on pixel location matching of the same cover image. Here the information is not directly embedded within the cover image but a sequence of 4 bits of secret data is compared to the 4 most significant bits (4MSB) of the cover image pixels. The locations of the matching pixels are taken to substitute the 2 least significant bits (2LSB) of the cover image pixels. Since the data are not directly hidden in cover image, the proposed approach is more secure and difficult to break. Intruders cannot intercept it by using common LSB techniques.", "title": "" }, { "docid": "f6ae71fee81a8560f37cb0dccfd1e3cd", "text": "Linguistic research to date has determined many of the principles that govern the structure of the spatial schemas represented by closed-class forms across the world’s languages. contributing to this cumulative understanding have, for example, been Gruber 1965, Fillmore 1968, Leech 1969, Clark 1973, Bennett 1975, Herskovits 1982, Jackendoff 1983, Zubin and Svorou 1984, as well as myself, Talmy 1983, 2000a, 2000b). It is now feasible to integrate these principles and to determine the comprehensive system they belong to for spatial structuring in spoken language. 
The finding here is that this system has three main parts: the componential, the compositional, and the augmentive.", "title": "" }, { "docid": "123760f70d7f609dfe3cf3158a5cc23f", "text": "We investigate national dialect identification, the task of classifying English documents according to their country of origin. We use corpora of known national origin as a proxy for national dialect. In order to identify general (as opposed to corpus-specific) characteristics of national dialects of English, we make use of a variety of corpora of different sources, with inter-corpus variation in length, topic and register. The central intuition is that features that are predictive of national origin across different data sources are features that characterize a national dialect. We examine a number of classification approaches motivated by different areas of research, and evaluate the performance of each method across 3 national dialects: Australian, British, and Canadian English. Our results demonstrate that there are lexical and syntactic characteristics of each national dialect that are consistent across data sources.", "title": "" }, { "docid": "7835bb8463eff6a7fbeec256068e1f09", "text": "Efforts to incorporate intelligence into the user interface have been underway for decades, but the commercial impact of this work has not lived up to early expectations, and is not immediately apparent. This situation appears to be changing. However, so far the most interesting intelligent user interfaces (IUIS) have tended to use minimal or simplistic AI. In this panel we consider whether more or less AI is the key to the development of compelling IUIS. The panelists will present examples of compelling IUIS that use a selection of AI techniques, mostly simple, but some complex. Each panelist will then comment on the merits of different kinds and quantities of AI in the development of pragmatic interface technology.", "title": "" }, { "docid": "fd2b1d2a4d44f0535ceb6602869ffe1c", "text": "A conventional FCM algorithm does not fully utilize the spatial information in the image. In this paper, we present a fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. The advantages of the new method are the following: (1) it yields regions more homogeneous than those of other methods, (2) it reduces the spurious blobs, (3) it removes noisy spots, and (4) it is less sensitive to noise than other techniques. This technique is a powerful method for noisy image segmentation and works for both single and multiple-feature data with spatial information.", "title": "" }, { "docid": "90401f0e283bea2daed999de00dcacc5", "text": "Steganography is a branch of information security which deals with transmission of message without being detected. Message, to be send, is embedded in a cover file. Different types of digital can be used as cover object, we used (.WAV) audio as our cover file in the research work. The objective of steganography is to shield the fact that the message exists in the transmission medium. Many algorithms have so far derived for this purpose can be categorized in terms of their embedding technique, time and space complexity. LSB is the acronym of „Least Significant Bit‟, is one of the algorithm that is considered as the easiest in way of hiding information in a digital media, also it has good efficiency. 
It perform its task by embedding secret message in the least significant bits of each data sample of audio file. Ease of cracking this algorithm makes it more prone to visual and statistical attacks. Keeping this in mind few improvisation are being done on LSB algorithm that reduces the ease of cracking message. Modified version of LSB algorithm which we call as „MODIFIED LSB ALGORITHM‟ uses the pseudo-random number generator to spread the secret message over the cover in a random manner. This algorithm will be more immune to statistical attacks without affecting its efficiency significantly.", "title": "" }, { "docid": "fd455e27b023d849c59526655c5060da", "text": "Face Detection is an important step in any face recognition systems, for the purpose of localizing and extracting face region from the rest of the images. There are many techniques, which have been proposed from simple edge detection techniques to advance techniques such as utilizing pattern recognition approaches. This paper evaluates two methods of face detection, her features and Local Binary Pattern features based on detection hit rate and detection speed. The algorithms were tested on Microsoft Visual C++ 2010 Express with OpenCV library. The experimental results show that Local Binary Pattern features are most efficient and reliable for the implementation of a real-time face detection system.", "title": "" }, { "docid": "4ea07335d42a859768565c8d88cd5280", "text": "This paper brings together research from two different fields – user modelling and web ontologies – in attempt to demonstrate how recent semantic trends in web development can be combined with the modern technologies of user modelling. Over the last several years, a number of user-adaptive systems have been exploiting ontologies for the purposes of semantics representation, automatic knowledge acquisition, domain and user model visualisation and creation of interoperable and reusable architectural solutions. Before discussing these projects, we first overview the underlying user modelling and ontological technologies. As an example of the project employing ontology-based user modelling, we present an experiment design for translation of overlay student models for relative domains by means of ontology mapping.", "title": "" }, { "docid": "141cab8897e01abef28bf2c2a78874e1", "text": "Botnet is a network of compromised computers controlled by the attacker(s) from remote locations via Command and Control (C&C) channels. The botnets are one of the largest global threats to the Internet-based commercial and social world. The decentralized Peer-to-Peer (P2P) botnets have appeared in the recent past and are growing at a faster pace. These P2P botnets are continuously evolving from diverse C&C protocols using hybrid structures and are turning to be more complicated and stealthy. In this paper, we present a comprehensive survey of the evolution, functionalities, modelling and the development life cycle of P2P botnets. Further, we investigate the various P2P botnet detection approaches. Finally, discuss the key research challenges useful for the research initiatives. This paper is useful in understanding the P2P botnets and gives an insight into the usefulness and limitations of the various P2P botnet detection techniques proposed by the researchers. 
The study will enable the researchers toward proposing the more useful detection techniques.", "title": "" }, { "docid": "df679dcd213842a786c1ad9587c66f77", "text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in", "title": "" }, { "docid": "11229bf95164064f954c25681c684a16", "text": "This article proposes integrating the insights generated by framing, priming, and agenda-setting research through a systematic effort to conceptualize and understand their larger implications for political power and democracy. The organizing concept is bias, that curiously undertheorized staple of public discourse about the media. 
After showing how agenda setting, framing and priming fit together as tools of power, the article connects them to explicit definitions of news slant and the related but distinct phenomenon of bias. The article suggests improved measures of slant and bias. Properly defined and measured, slant and bias provide insight into how the media influence the distribution of power: who gets what, when, and how. Content analysis should be informed by explicit theory linking patterns of framing in the media text to predictable priming and agenda-setting effects on audiences. When unmoored by such underlying theory, measures and conclusions of media bias are suspect.", "title": "" }, { "docid": "d529d1052fce64ae05fbc64d2b0450ab", "text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f4df305ad32ebdd1006eefdec6ee7ca3", "text": "In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.", "title": "" }, { "docid": "d5330d3045a27f2c59ef01903b87a54e", "text": "Industrial Control and SCADA (Supervisory Control and Data Acquisition) networks control critical infrastructure such as power plants, nuclear facilities, and water supply systems. These systems are increasingly the target of cyber attacks by threat actors of different kinds, with successful attacks having the potential to cause damage, cost and injury/loss of life. 
As a result, there is a strong need for enhanced tools to detect cyber threats in SCADA networks. This paper makes a number of contributions to advance research in this area. First, we study the level of support for SCADA protocols in well-known open source intrusion detection systems (IDS). Second, we select a specific IDS, Suricata, and enhance it to include support for detecting threats against SCADA systems running the EtherNet/IP (ENIP) industrial control protocol. Finally, we conduct a traffic-based study to evaluate the performance of the new ENIP module in Suricata - analyzing its performance in low performance hardware systems.", "title": "" }, { "docid": "86820c43e63066930120fa5725b5b56d", "text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.", "title": "" } ]
scidocsrr
004dc49233e1387327a6a6bc39dfa513
Robust Single-Image Super-Resolution Based on Adaptive Edge-Preserving Smoothing Regularization
[ { "docid": "14fdf8fa41d46ad265b48bbc64a2d3cc", "text": "Preserving edge structures is a challenge to image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique through directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel are fused by the linear minimum mean square-error estimation (LMMSE) technique into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm to reduce computational cost without sacrificing much the interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts", "title": "" }, { "docid": "d4c7493c755a3fde5da02e3f3c873d92", "text": "Edge-directed image super resolution (SR) focuses on ways to remove edge artifacts in upsampled images. Under large magnification, however, textured regions become blurred and appear homogenous, resulting in a super-resolution image that looks unnatural. Alternatively, learning-based SR approaches use a large database of exemplar images for “hallucinating” detail. The quality of the upsampled image, especially about edges, is dependent on the suitability of the training images. This paper aims to combine the benefits of edge-directed SR with those of learning-based SR. In particular, we propose an approach to extend edge-directed super-resolution to include detail from an image/texture example provided by the user (e.g., from the Internet). A significant benefit of our approach is that only a single exemplar image is required to supply the missing detail – strong edges are obtained in the SR image even if they are not present in the example image due to the combination of the edge-directed approach. In addition, we can achieve quality results at very large magnification, which is often problematic for both edge-directed and learning-based approaches.", "title": "" } ]
[ { "docid": "378dcab60812075f58534d8dca1c5f33", "text": "Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicles is essential for a safe driving, which requires computing accurate geometric and semantic information in real-time. In this paper, we challenge state-of-the-art computer vision algorithms for building a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage dense 3D semantic maps are created. In the online stage the current driving area is recognized in the maps via a re-localization process, which allows to retrieve the pre-computed accurate semantics and 3D geometry in real-time. Then, detecting the dynamic obstacles we obtain a rich understanding of the current scene. We evaluate quantitatively our proposal in the KITTI dataset and discuss the related open challenges for the computer vision community.", "title": "" }, { "docid": "448d70d9f5f8e5fcb8d04d355a02c8f9", "text": "Structural health monitoring (SHM) using wireless sensor networks (WSNs) has gained research interest due to its ability to reduce the costs associated with the installation and maintenance of SHM systems. SHM systems have been used to monitor critical infrastructure such as bridges, high-rise buildings, and stadiums and has the potential to improve structure lifespan and improve public safety. The high data collection rate of WSNs for SHM pose unique network design challenges. This paper presents a comprehensive survey of SHM using WSNs outlining the algorithms used in damage detection and localization, outlining network design challenges, and future research directions. Solutions to network design problems such as scalability, time synchronization, sensor placement, and data processing are compared and discussed. This survey also provides an overview of testbeds and real-world deployments of WSNs for SH.", "title": "" }, { "docid": "f815f4aea585dd112e37f2c7fd9aa8f6", "text": "Hundreds of millions of people play computer games every day. For them, game content—from 3D objects to abstract puzzles—plays a major entertainment role. Manual labor has so far ensured that the quality and quantity of game content matched the demands of the playing community, but is facing new scalability challenges due to the exponential growth over the last decade of both the gamer population and the production costs. Procedural Content Generation for Games (PCG-G) may address these challenges by automating, or aiding in, game content generation. PCG-G is difficult, since the generator has to create the content, satisfy constraints imposed by the artist, and return interesting instances for gamers. Despite a large body of research focusing on PCG-G, particularly over the past decade, ours is the first comprehensive survey of the field of PCG-G. We first introduce a comprehensive, six-layered taxonomy of game content: bits, space, systems, scenarios, design, and derived. Second, we survey the methods used across the whole field of PCG-G from a large research body. Third, we map PCG-G methods to game content layers; it turns out that many of the methods used to generate game content from one layer can be used to generate content from another. We also survey the use of methods in practice, that is, in commercial or prototype games. 
Fourth and last, we discuss several directions for future research in PCG-G, which we believe deserve close attention in the near future.", "title": "" }, { "docid": "ffb7b58d947aa15cd64efbadb0f9543d", "text": "A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs that have the potential to outperform methods that are ‘optimal’ in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright q 2010 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "f148ed07ef31d81eee08fd0f5a6b6ea8", "text": "Cyber-physical systems often consist of entities that interact with each other over time. Meanwhile, as part of the continued digitization of industrial processes, various sensor technologies are deployed that enable us to record time-varying attributes (a.k.a., time series) of such entities, thus producing correlated time series. To enable accurate forecasting on such correlated time series, this paper proposes two models that combine convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first model employs a CNN on each individual time series, combines the convoluted features, and then applies an RNN on top of the convoluted features in the end to enable forecasting. The second model adds additional auto-encoders into the individual CNNs, making the second model a multi-task learning model, which provides accurate and robust forecasting. Experiments on a large real-world correlated time series data set suggest that the proposed two models are effective and outperform baselines in most settings.", "title": "" }, { "docid": "681586ff70dda851f52a280093de9989", "text": "Due to large scale and complexity of big data, mining the big data using a single personal computer is a difficult problem. With increasing in the size of databases, parallel computing systems can cause considerable advantages in the data mining applications by means of the exploitation of data mining algorithms. Parallelization of association rule mining algorithms is an important task in data mining to mine frequent patterns from transaction databases. These algorithms either distribute database horizontally or increase number of CPU to reduce execution time of frequent pattern mining. In this paper, a novel frequent itemset mining algorithm, namely Horizontal parallel-Apriori (HP-Apriori), is proposed that divides database both horizontally and vertically with partitioning mining process into four sub-processes so that all four tasks are performed in parallel way. Also the HP-Apriori tries to speed up the mining process by an index file that is generated in the first step of algorithm. The proposed algorithm has been compared with Count Distribution (CD) in terms of execution time and speedup criteria on the four real datasets. 
Experimental results demonstrated that the HP-Apriori outperforms over CD in terms of minimizing execution time and maximizing speedup in high scalability.", "title": "" }, { "docid": "884b880ac8f8c406baec25d616643ac0", "text": "Repeated retrieval practice is a powerful learning tool for promoting long-term retention, but students use this tool ineffectively when regulating their learning. The current experiments evaluated the efficacy of a minimal intervention aimed at improving students' self-regulated use of repeated retrieval practice. Across 2 experiments, students made decisions about when to study, engage in retrieval practice, or stop learning a set of foreign language word pairs. Some students received direct instruction about how to use repeated retrieval practice. These instructions emphasized the mnemonic benefits of retrieval practice over a less effective strategy (restudying) and told students how to use repeated retrieval practice to maximize their performance-specifically, that they should recall a translation correctly 3 times during learning. This minimal intervention promoted more effective self-regulated use of retrieval practice and better retention of the translations compared to a control group that received no instruction. Students who experienced this intervention also showed potential for long-term changes in self-regulated learning: They spontaneously used repeated retrieval practice 1 week later to learn new materials. These results provide a promising first step for developing guidelines for teaching students how to regulate their learning more effectively using repeated retrieval practice. (PsycINFO Database Record", "title": "" }, { "docid": "78cf38ee62d5501c3119552cb70b0997", "text": "This document discusses the status of research on detection and prevention of financial fraud undertaken as part of the IST European Commission funded FF POIROT (Financial Fraud Prevention Oriented Information Resources Using Ontology Technology) project. A first task has been the specification of the user requirements that define the functionality of the financial fraud ontology to be designed by the FF POIROT partners. It is claimed here that modeling fraudulent activity involves a mixture of law and facts as well as inferences about facts present, facts presumed or facts missing. The purpose of this paper is to explain this abstract model and to specify the set of user requirements.", "title": "" }, { "docid": "680c621ebc0dd6f762abb8df9871070e", "text": "Methods for learning to search for structured prediction typically imitate a reference policy, with existing theoretical guarantees demonstrating low regret compared to that reference. This is unsatisfactory in many applications where the reference policy is suboptimal and the goal of learning is to improve upon it. Can learning to search work even when the reference is poor? We provide a new learning to search algorithm, LOLS, which does well relative to the reference policy, but additionally guarantees low regret compared to deviations from the learned policy: a local-optimality guarantee. Consequently, LOLS can improve upon the reference policy, unlike previous algorithms. 
This enables us to develop structured contextual bandits, a partial information structured prediction setting with many potential applications.", "title": "" }, { "docid": "b9c74367d813c8b821505bfea2c5946e", "text": "This paper presents correct algorithms for answering the following two questions; (i) Does there exist a causal explanation con­ sistent with a set of background knowledge which explains all of the observed indepen­ dence facts in a sample? (ii) Given that there is such a causal explanation what are the causal relationships common to every such", "title": "" }, { "docid": "14c786d87fc06ab85ad41f6f6c30db21", "text": "When an attacker tries to penetrate the network, there are many defensive systems, including intrusion detection systems (IDSs). Most IDSs are capable of detecting many attacks, but can not provide a clear idea to the analyst because of the huge number of false alerts generated by these systems. This weakness in the IDS has led to the emergence of many methods in which to deal with these alerts, minimize them and highlight the real attacks. It has come to a stage to take a stock of the research results a comprehensive view so that further research in this area will be motivated objectively to fulfill the gaps", "title": "" }, { "docid": "0957b0617894561ea6d6e85c43cfb933", "text": "We consider the online metric matching problem. In this prob lem, we are given a graph with edge weights satisfying the triangl e inequality, andk vertices that are designated as the right side of the matchin g. Over time up tok requests arrive at an arbitrary subset of vertices in the gra ph and each vertex must be matched to a right side vertex immediately upon arrival. A vertex cannot be rematched to another vertex once it is matched. The goal is to minimize the total weight of the matching. We give aO(log k) competitive randomized algorithm for the problem. This improves upon the best known guarantee of O(log k) due to Meyerson, Nanavati and Poplawski [19]. It is well known that no deterministic al gorithm can have a competitive less than 2k − 1, and that no randomized algorithm can have a competitive ratio of less than l k.", "title": "" }, { "docid": "e245b1444428e4187737545408dacb72", "text": "Technology offers great potential to reshape our relationship to work, but the form of that reshaping should not be allowed to happen haphazardly. As work and technology use become increasingly intertwined, a number of issues deserve re-examination. Some of these relate to work intensification and/or longer hours and possible exchange for flexibility. Recent research on use of employer-supplied smart phones offers some insight into employee perceptions of why the company supplies this technology and whether there is risk to declining the opportunity. Because dangers are more readily apparent, current limitations of technology use have been approached more often through laws related to driving than through general policies or regulation about the work itself. However, there are other concerns that may translate into employer liability beyond the possibility of car accidents. A variety of these concerns are covered in this article, along with related suggestion for actions by employers, their advisory groups, technology companies, government and employees themselves.", "title": "" }, { "docid": "2271347e3b04eb5a73466aecbac4e849", "text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. 
[2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method", "title": "" }, { "docid": "931c392507d6d7bccdc65d27ef2bbcab", "text": "Language processing becomes more and more important in multimedia processing. Although embedded vector representations of words offer impressive performance on many natural language processing (NLP) applications, the information of ordered input sequences is lost to some extent if only context-based samples are used in the training. For further performance improvement, two new post-processing techniques, called post-processing via variance normalization (PVN) and post-processing via dynamic embedding (PDE), are proposed in this work. The PVN method normalizes the variance of principal components of word vectors, while the PDE method learns orthogonal latent variables from ordered input sequences. The PVN and the PDE methods can be integrated to achieve better performance. We apply these post-processing techniques to several popular word embedding methods to yield their post-processed representations. Extensive experiments are conducted to demonstrate the effectiveness of the proposed post-processing techniques.", "title": "" }, { "docid": "c34d4d0e3dcf52aba737a87877d55f49", "text": "Building Information Modeling is based on the idea of the continuous use of digital building models throughout the entire lifecycle of a built facility, starting from the early conceptual design and detailed design phases, to the construction phase, and the long phase of operation. BIM significantly improves information flow between stakeholders involved at all stages, resulting in an increase in efficiency by reducing the laborious and error-prone manual re-entering of information that dominates conventional paper-based workflows. Thanks to its many advantages, BIM is already practiced in many construction projects throughout the entire world. However, the fragmented nature of the construction industry still impedes its more widespread use. Government initiatives around the world play an important role in increasing BIM adoption: as the largest client of the construction industry in many countries, the state has the power to significantly change its work practices. This chapter discusses the motivation for applying BIM, offers a detailed definition of BIM along with an overview of typical use cases, describes the common BIM maturity grades and reports on BIM adoption levels in various countries around the globe. A. Borrmann ( ) Chair of Computational Modeling and Simulation, Technical University of Munich, München, Germany e-mail: andre.borrmann@tum.de M. König Chair of Computing in Engineering, Ruhr University Bochum, Bochum, Germany e-mail: koenig@inf.bi.rub.de C. Koch Chair of Intelligent Technical Design, Bauhaus-Universität Weimar, Weimar, Germany e-mail: c.koch@uni-weimar.de J. Beetz Chair of Design Computation, RWTH Aachen University, Aachen, Germany e-mail: j.beetz@caad.arch.rwth-aachen.de © Springer International Publishing AG, part of Springer Nature 2018 A. Borrmann et al. (eds.), Building Information Modeling, https://doi.org/10.1007/978-3-319-92862-3_1 1 2 A. Borrmann et al. 1.1 Building Information Modeling: Why? 
In the last decade, digitalization has transformed a wide range of industrial sectors, resulting in a tremendous increase in productivity, product quality and product variety. In the Architecture, Engineering, Construction (AEC) industry, digital tools are increasingly adopted for designing, constructing and operating buildings and infrastructure assets. However, the continuous use of digital information along the entire process chain falls significantly behind other industry domains. All too often, valuable information is lost because information is still predominantly handed over in the form of drawings, either as physical printed plots on paper or in a digital but limited format. Such disruptions in the information flow occur across the entire lifecycle of a built facility: in its design, construction and operation phases as well as in the very important handovers between these phases. The planning and realization of built facilities is a complex undertaking involving a wide range of stakeholders from different fields of expertise. For a successful construction project, a continuous reconciliation and intense exchange of information among these stakeholders is necessary. Currently, this typically involves the handover of technical drawings of the construction project in graphical manner in the form of horizontal and vertical sections, views and detail drawings. The software used to create these drawings imitate the centuries-old way of working using a drawing board. However, line drawings cannot be comprehensively understood by computers. The information they contain can only be partially interpreted and processed by computational methods. Basing the information flow on drawings alone therefore fails to harness the great potential of information technology for supporting project management and building operation. A key problem is that the consistency of the diverse technical drawings can only be checked manually. This is a potentially massive source of errors, particularly if we take into account that the drawings are typically created by experts from different design disciplines and across multiple companies. Design changes are particularly challenging: if they are not continuously tracked and relayed to all related plans, inconsistencies can easily arise and often remain undiscovered until the actual construction – where they then incur significant extra costs for ad-hoc solutions on site. In conventional practice, design changes are marked only by means of revision clouds in the drawings, which can be hard to detect and ambiguous. The limited information depth of technical drawings also has a significant drawback in that information on the building design cannot be directly used by downstream applications for any kind of analysis, calculation and simulation, but must be re-entered manually which again requires unnecessary additional work and is a further source of errors. The same holds true for the information handover to the building owner after the construction is finished. He must invest considerable effort into extracting the required information for operating the building from the drawings and documents and enter it into a facility management system. At each of 1 Building Information Modeling: Why? What? How? 3 Conceptual Design Construction Detailed Design Operation Time Conventional workflows Digital workflows Information loss Project information Fig. 1.1 Loss of information caused by disruptions in the digital information flow. (Based on Eastman et al. 
2008) these information exchange points, data that was once available in digital form is lost and has to be laboriously re-created (Fig. 1.1). This is where Building Information Modeling comes into play. By applying the BIM method, a much more profound use of computer technology in the design, engineering, construction and operation of built facilities is realized. Instead of recording information in drawings, BIM stores, maintains and exchanges information using comprehensive digital representations: the building information models. This approach dramatically improves the coordination of the design activities, the integration of simulations, the setup and control of the construction process, as well as the handover of building information to the operator. By reducing the manual re-entering of data to a minimum and enabling the consequent re-use of digital information, laborious and error-prone work is avoided, which in turn results in an increase in productivity and quality in construction projects. Other industry sectors, such as the automotive industry, have already undergone the transition to digitized, model-based product development and manufacturing which allowed them to achieve significant efficiency gains (Kagermann 2015). The Architecture Engineering and Construction (AEC) industry, however, has its own particularly challenging boundary conditions: first and foremost, the process and value creation chain is not controlled by one company, but is dispersed across a large number of enterprises including architectural offices, engineering consultancies, and construction firms. These typically cooperate only for the duration of an individual construction project and not for a longer period of time. Consequently, there are a large number of interfaces in the ad-hoc network of companies where digital information has to be handed over. As these information flows must be supervised and controlled by a central instance, the onus is on the building owner to specify and enforce the use of Building Information Modeling. 4 A. Borrmann et al. 1.2 Building Information Modeling: What? A Building Information Model is a comprehensive digital representation of a built facility with great information depth. It typically includes the three-dimensional geometry of the building components at a defined level of detail. In addition, it also comprises non-physical objects, such as spaces and zones, a hierarchical project structure, or schedules. Objects are typically associated with a well-defined set of semantic information, such as the component type, materials, technical properties, or costs, as well as the relationships between the components and other physical or logical entities (Fig. 1.2). The term Building Information Modeling (BIM) consequently describes both the process of creating such digital building models as well as the process of maintaining, using and exchanging them throughout the entire lifetime of the built facility (Fig. 1.3). The US National Building Information Modeling Standard defines BIM as follows (NIBS 2012): Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition. 
A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of that stakeholder. Fig. 1.2 A BIM model comprises both the 3D geometry of each building element as well as a rich set of semantic information provided by attributes and relationships 1 Building Information Modeling: Why? What? How? 5 Construction Detailed Design Operation Conceptual Design Modification Demolition Facility Management, Maintenance, Repair Cost estimation Design Options Progress Monitoring Simulations and Analyses Logistics Process Simulation Coordination Visualization Spatial Program", "title": "" }, { "docid": "cdaef1fd6b2dcc8267c3a778761113bd", "text": "This paper presents a new control architecture for fast, accurate force control of antagonistic pairs of shape memory alloy wires. The main components are: a differential-mode controller, which controls the output force, an anti-slack mechanism, a rapid-heating mechanism and an anti-overload mechanism. The closed-loop response is fast and accurate, even in the presence of large external motion disturbances. There is no sign of limit cycles; and the performance is unaffected by large load inertias. This paper also presents an architecture for position control, in which a position feedback loop is added to the force control architecture. Experimental results show force control accuracies as high as 1mN in a ±3N range, force output rates as high as 50Ns−1, and highly accurate position control with steady-state errors below the resolution of the position encoder.", "title": "" }, { "docid": "8228886ce1093cd3e3f69cdd7bc6173e", "text": "Evolutionary-biological reasoning suggests that individuals should be differentially susceptible to environmental influences, with some people being not just more vulnerable than others to the negative effects of adversity, as the prevailing diathesis-stress view of psychopathology (and of many environmental influences) maintains, but also disproportionately susceptible to the beneficial effects of supportive and enriching experiences (or just the absence of adversity). Evidence consistent with the proposition that individuals differ in plasticity is reviewed. The authors document multiple instances in which (a) phenotypic temperamental characteristics, (b) endophenotypic attributes, and (c) specific genes function less like \"vulnerability factors\" and more like \"plasticity factors,\" thereby rendering some individuals more malleable or susceptible than others to both negative and positive environmental influences. Discussion focuses upon limits of the evidence, statistical criteria for distinguishing differential susceptibility from diathesis stress, potential mechanisms of influence, and unknowns in the differential-susceptibility equation.", "title": "" }, { "docid": "954660a163fc8453368a6863d1c3fd85", "text": "The application potential of very high resolution (VHR) remote sensing imagery has been boosted by recent developments in the data acquisition and processing ability of aerial photogrammetry. However, shadows in images contribute to problems such as incomplete spectral information, lower intensity brightness, and fuzzy boundaries, which seriously affect the efficiency of the image interpretation. In this paper, to address these issues, a simple and automatic method of shadow detection is presented. 
The proposed method combines the advantages of the property-based and geometric-based methods to automatically detect the shadowed areas in VHR imagery. A geometric model of the scene and the solar position are used to delineate the shadowed and non-shadowed areas in the VHR image. A matting method is then applied to the image to refine the shadow mask. Different types of shadowed aerial orthoimages were used to verify the effectiveness of the proposed shadow detection method, and the results were compared with the results obtained by two state-of-the-art methods. The overall accuracy of the proposed method on the three tests was around 90%, confirming the effectiveness and robustness of the new method for detecting fine shadows, without any human input. The proposed method also performs better in detecting shadows in areas with water than the other two methods.", "title": "" }, { "docid": "5c74348ce0028786990b4ca39b1e858d", "text": "The terminology Internet of Things (IoT) refers to a future where every day physical objects are connected by the Internet in one form or the other, but outside the traditional desktop realm. The successful emergence of the IoT vision, however, will require computing to extend past traditional scenarios involving portables and smart-phones to the connection of everyday physical objects and the integration of intelligence with the environment. Subsequently, this will lead to the development of new computing features and challenges. The main purpose of this paper, therefore, is to investigate the features, challenges, and weaknesses that will come about, as the IoT becomes reality with the connection of more and more physical objects. Specifically, the study seeks to assess emergent challenges due to denial of service attacks, eavesdropping, node capture in the IoT infrastructure, and physical security of the sensors. We conducted a literature review about IoT, their features, challenges, and vulnerabilities. The methodology paradigm used was qualitative in nature with an exploratory research design, while data was collected using the desk research method. We found that, in the distributed form of architecture in IoT, attackers could hijack unsecured network devices converting them into bots to attack third parties. Moreover, attackers could target communication channels and extract data from the information flow. Finally, the perceptual layer in distributed IoT architecture is also found to be vulnerable to node capture attacks, including physical capture, brute force attack, DDoS attacks, and node privacy leaks.", "title": "" } ]
scidocsrr
6b9ce507f12ba3036f9c580491e845e3
TLTD: A Testing Framework for Learning-Based IoT Traffic Detection Systems
[ { "docid": "67e85e8b59ec7dc8b0019afa8270e861", "text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.", "title": "" }, { "docid": "17611b0521b69ad2b22eeadc10d6d793", "text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "title": "" }, { "docid": "580d83a0e627daedb45fe55e3f9b6883", "text": "With near exponential growth predicted in the number of Internet of Things (IoT) based devices within networked systems there is need of a means of providing their flexible and secure integration. Software Defined Networking (SDN) is a concept that allows for the centralised control and configuration of network devices, and also provides opportunities for the dynamic control of network traffic. This paper proposes the use of an SDN gateway as a distributed means of monitoring the traffic originating from and directed to IoT based devices. This gateway can then both detect anomalous behaviour and perform an appropriate response (blocking, forwarding, or applying Quality of Service). Initial results demonstrate that, while the addition of the attack detection functionality has an impact on the number of flow installations possible per second, it can successfully detect and block TCP and ICMP flood based attacks.", "title": "" }, { "docid": "11a69c06f21e505b3e05384536108325", "text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. 
An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those using signals from cameras and other sensors as input. This paper shows that even in such physical-world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "title": "" } ]
[ { "docid": "f400ca4fe8fc5c684edf1ae60e026632", "text": "Driverless vehicles will be common on the road in a short time. They will have many impacts on the global transport market trends. One of the remarkable driverless vehicles impacts will be the laying aside of rail systems, because of several reasons, that is to say traffic congestions will be no more a justification for rail, rail will not be the best answer for disableds, air pollution of cars are more or less equal to air pollution of trains and the last but not least reason is that driverless cars are safer than trains.", "title": "" }, { "docid": "6171a708ea6470b837439ad23af90dff", "text": "Cardiovascular diseases represent a worldwide relevant socioeconomical problem. Cardiovascular disease prevention relies also on lifestyle changes, including dietary habits. The cardioprotective effects of several foods and dietary supplements in both animal models and in humans have been explored. It was found that beneficial effects are mainly dependent on antioxidant and anti-inflammatory properties, also involving modulation of mitochondrial function. Resveratrol is one of the most studied phytochemical compounds and it is provided with several benefits in cardiovascular diseases as well as in other pathological conditions (such as cancer). Other relevant compounds are Brassica oleracea, curcumin, and berberine, and they all exert beneficial effects in several diseases. In the attempt to provide a comprehensive reference tool for both researchers and clinicians, we summarized in the present paper the existing literature on both preclinical and clinical cardioprotective effects of each mentioned phytochemical. We structured the discussion of each compound by analyzing, first, its cellular molecular targets of action, subsequently focusing on results from applications in both ex vivo and in vivo models, finally discussing the relevance of the compound in the context of human diseases.", "title": "" }, { "docid": "94316059aba51baedd5662e7246e23c1", "text": "The increased need of content based image retrieval technique can be found in a number of different domains such as Data Mining, Education, Medical Imaging, Crime Prevention, Weather forecasting, Remote Sensing and Management of Earth Resources. This paper presents the content based image retrieval, using features like texture and color, called WBCHIR (Wavelet Based Color Histogram Image Retrieval).The texture and color features are extracted through wavelet transformation and color histogram and the combination of these features is robust to scaling and translation of objects in an image. The proposed system has demonstrated a promising and faster retrieval method on a WANG image database containing 1000 general-purpose color images. The performance has been evaluated by comparing with the existing systems in the literature.", "title": "" }, { "docid": "6560a704d5f8022193b60dd3ad213d5a", "text": "Despite web access on mobile devices becoming commonplace, users continue to experience poor web performance on these devices. Traditional approaches for improving web performance (e.g., compression, SPDY, faster browsers) face an uphill battle due to the fundamentally conflicting trends in user expectations of lower load times and richer web content. Embracing the reality that page load times will continue to be higher than user tolerance limits for the foreseeable future, we ask: How can we deliver the best possible user experience? 
To this end, we present KLOTSKI, a system that prioritizes the content most relevant to a user’s preferences. In designing KLOTSKI, we address several challenges in: (1) accounting for inter-resource dependencies on a page; (2) enabling fast selection and load time estimation for the subset of resources to be prioritized; and (3) developing a practical implementation that requires no changes to websites. Across a range of user preference criteria, KLOTSKI can significantly improve the user experience relative to native websites.", "title": "" }, { "docid": "7e40c7145f4613f12e7fc13646f3927c", "text": "One strategy for intelligent agents in order to reach their goals is to plan their actions in advance. This can be done by simulating how the agent’s actions affect the environment and how it evolves independently of the agent. For this simulation, a model of the environment is needed. However, the creation of this model might be labor-intensive and it might be computational complex to evaluate during simulation. That is why, we suggest to equip an intelligent agent with a learned intuition about the dynamics of its environment by utilizing the concept of intuitive physics. To demonstrate our approach, we used an agent that can freely move in a two dimensional floor plan. It has to collect moving targets while avoiding the collision with static and dynamic obstacles. In order to do so, the agent plans its actions up to a defined planning horizon. The performance of our agent, which intuitively estimates the dynamics of its surrounding objects based on artificial neural networks, is compared to an agent which has a physically exact model of the world and one that acts randomly. The evaluation shows comparatively good results for the intuition based agent considering it uses only a quarter of the computation time in comparison to the agent with a physically exact model.", "title": "" }, { "docid": "091d9afe87fa944548b9f11386112d6e", "text": "In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users are also studied using the methodology proposed in this paper. 
Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms, and the signal-to-noise ratio of primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90% detection probability is 14.2 ms. This optimal sensing time decreases when distributed spectrum sensing is applied.", "title": "" }, { "docid": "f58a1a0d8cc0e2c826c911be4451e0df", "text": "From an accessibility perspective, voice-controlled, home-based intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screen reader output. To examine the accessibility of off-the-shelf IPAs (e.g., Amazon Echo) and to understand how users with disabilities are making use of these devices, we conducted two exploratory studies. The first, broader study is a content analysis of 346 Amazon Echo reviews that include users with disabilities, while the second study more specifically focuses on users with visual impairments, through interviews with 16 current users of home-based IPAs. Findings show that, although some accessibility challenges exist, users with a range of disabilities are using the Amazon Echo, including for unexpected cases such as speech therapy and support for caregivers. Richer voice-based applications and solutions to support discoverability would be particularly useful to users with visual impairments. These findings should inform future work on accessible voice-based IPAs.", "title": "" }, { "docid": "374674cc8a087d31ee2c801f7e49aa8d", "text": "Two biological control agents, Bacillus subtilis AP-01 (Larminar(™)) and Trichoderma harzianum AP-001 (Trisan(™)) alone or/in combination were investigated in controlling three tobacco diseases, including bacterial wilt (Ralstonia solanacearum), damping-off (Pythium aphanidermatum), and frogeye leaf spot (Cercospora nicotiana). Tests were performed in greenhouse by soil sterilization prior to inoculation of the pathogens. Bacterial-wilt and damping off pathogens were drenched first and followed with the biological control agents and for comparison purposes, two chemical fungicides. But for frogeye leaf spot, which is an airborne fungus, a spraying procedure for every treatment including a chemical fungicide was applied instead of drenching. Results showed that neither B. subtilis AP-01 nor T harzianum AP-001 alone could control the bacterial wilt, but when combined, their controlling capabilities were as effective as a chemical treatment. These results were also similar for damping-off disease when used in combination. In addition, the combined B. subtilis AP-01 and T. harzianum AP-001 resulted in a good frogeye leaf spot control, which was not significantly different from the chemical treatment.", "title": "" }, { "docid": "32b860121b49bd3a61673b3745b7b1fd", "text": "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? 
We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.", "title": "" }, { "docid": "f70ce9d95ac15fc0800b8e6ac60247cb", "text": "Many systems for the parallel processing of big data are available today. Yet, few users can tell by intuition which system, or combination of systems, is \"best\" for a given workflow. Porting workflows between systems is tedious. Hence, users become \"locked in\", despite faster or more efficient systems being available. This is a direct consequence of the tight coupling between user-facing front-ends that express workflows (e.g., Hive, SparkSQL, Lindi, GraphLINQ) and the back-end execution engines that run them (e.g., MapReduce, Spark, PowerGraph, Naiad).\n We argue that the ways that workflows are defined should be decoupled from the manner in which they are executed. To explore this idea, we have built Musketeer, a workflow manager which can dynamically map front-end workflow descriptions to a broad range of back-end execution engines.\n Our prototype maps workflows expressed in four high-level query languages to seven different popular data processing systems. Musketeer speeds up realistic workflows by up to 9x by targeting different execution engines, without requiring any manual effort. Its automatically generated back-end code comes within 5%--30% of the performance of hand-optimized implementations.", "title": "" }, { "docid": "11a2882124e64bd6b2def197d9dc811a", "text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.", "title": "" }, { "docid": "b44ebb850ce2349dddc35bbf9a01fb8a", "text": "Automatically assessing emotional valence in human speech has historically been a difficult task for machine learning algorithms. The subtle changes in the voice of the speaker that are indicative of positive or negative emotional states are often “overshadowed” by voice characteristics relating to emotional intensity or emotional activation. In this work we explore a representation learning approach that automatically derives discriminative representations of emotional speech. In particular, we investigate two machine learning strategies to improve classifier performance: (1) utilization of unlabeled data using a deep convolutional generative adversarial network (DCGAN), and (2) multitask learning. Within our extensive experiments we leverage a multitask annotated emotional corpus as well as a large unlabeled meeting corpus (around 100 hours). 
Our speaker-independent classification experiments show that in particular the use of unlabeled data in our investigations improves performance of the classifiers and both fully supervised baseline approaches are outperformed considerably. We improve the classification of emotional valence on a discrete 5-point scale to 43.88% and on a 3-point scale to 49.80%, which is competitive to state-of-the-art performance.", "title": "" }, { "docid": "ccaba0b30fc1a0c7d55d00003b07725a", "text": "We collect a corpus of 1554 online news articles from 23 RSS feeds and analyze it in terms of controversy and sentiment. We use several existing sentiment lexicons and lists of controversial terms to perform a number of statistical analyses that explore how sentiment and controversy are related. We conclude that the negative sentiment and controversy are not necessarily positively correlated as has been claimed in the past. In addition, we apply an information theoretic approach and suggest that entropy might be a good predictor of controversy.", "title": "" }, { "docid": "6a2e5831f2a2e1625be2bfb7941b9d1b", "text": "Benefited from cloud storage services, users can save their cost of buying expensive storage and application servers, as well as deploying and maintaining applications. Meanwhile they lost the physical control of their data. So effective methods are needed to verify the correctness of the data stored at cloud servers, which are the research issues the Provable Data Possession (PDP) faced. The most important features in PDP are: 1) supporting for public, unlimited numbers of times of verification; 2) supporting for dynamic data update; 3) efficiency of storage space and computing. In mobile cloud computing, mobile end-users also need the PDP service. However, the computing workloads and storage burden of client in existing PDP schemes are too heavy to be directly used by the resource-constrained mobile devices. To solve this problem, with the integration of the trusted computing technology, this paper proposes a novel public PDP scheme, in which the trusted third-party agent (TPA) takes over most of the calculations from the mobile end-users. By using bilinear signature and Merkle hash tree (MHT), the scheme aggregates the verification tokens of the data file into one small signature to reduce communication and storage burden. MHT is also helpful to support dynamic data update. In our framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform model (TPM) chips, and the needed computing workload and storage space is fit for mobile devices. Our scheme realizes provable secure storage service for resource-constrained mobile devices in mobile cloud computing.", "title": "" }, { "docid": "53e668839e9d7e065dc7864830623790", "text": "Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. 
In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided.", "title": "" }, { "docid": "9381ba0001262dd29d7ca74a98a56fc7", "text": "Despite several advances in information retrieval systems and user interfaces, the specification of queries over text-based document collections remains a challenging problem. Query specification with keywords is a popular solution. However, given the widespread adoption of gesture-driven interfaces such as multitouch technologies in smartphones and tablets, the lack of a physical keyboard makes query specification with keywords inconvenient. We present BinGO, a novel gestural approach to querying text databases that allows users to refine their queries using a swipe gesture to either \"like\" or \"dislike\" candidate documents as well as express the reasons they like or dislike a document by swiping through automatically generated \"reason bins\". Such reasons refine a user's query with additional keywords. We present an online and efficient bin generation algorithm that presents reason bins at gesture articulation. We motivate and describe BinGo's unique interface design choices. Based on our analysis and user studies, we demonstrate that query specification by swiping through reason bins is easy and expressive.", "title": "" }, { "docid": "8d4c66f9e12c1225df1e79628d666702", "text": "Recently, wavelet transforms have gained very high attention in many fields and applications such as physics, engineering, signal processing, applied mathematics and statistics. In this paper, we present the advantage of wavelet transforms in forecasting financial time series data. Amman stock market (Jordan) was selected as a tool to show the ability of wavelet transform in forecasting financial time series, experimentally. This article suggests a novel technique for forecasting the financial time series data, based on Wavelet transforms and ARIMA model. Daily return data from 1993 until 2009 is used for this study. 316 S. Al Wadi et al", "title": "" }, { "docid": "1977e7813b15ffb3a4238f3ed40f0e1f", "text": "Despite the existence of standard protocol, many stabilization centers (SCs) continue to experience high mortality of children receiving treatment for severe acute malnutrition. Assessing treatment outcomes and identifying predictors may help to overcome this problem. Therefore, a 30-month retrospective cohort study was conducted among 545 randomly selected medical records of children <5 years of age admitted to SCs in Gedeo Zone. Data was entered by Epi Info version 7 and analyzed by STATA version 11. Cox proportional hazards model was built by forward stepwise procedure and compared by the likelihood ratio test and Harrell's concordance, and fitness was checked by Cox-Snell residual plot. During follow-up, 51 (9.3%) children had died, and 414 (76%) and 26 (4.8%) children had recovered and defaulted (missed follow-up for 2 consecutive days), respectively. The survival rates at the end of the first, second and third weeks were 95.3%, 90% and 85%, respectively, and the overall mean survival time was 79.6 days. 
Age <24 months (adjusted hazard ratio [AHR] =2.841, 95% confidence interval [CI] =1.101-7.329), altered pulse rate (AHR =3.926, 95% CI =1.579-9.763), altered temperature (AHR =7.173, 95% CI =3.05-16.867), shock (AHR =3.805, 95% CI =1.829-7.919), anemia (AHR =2.618, 95% CI =1.148-5.97), nasogastric tube feeding (AHR =3.181, 95% CI =1.18-8.575), hypoglycemia (AHR =2.74, 95% CI =1.279-5.87) and treatment at hospital stabilization center (AHR =4.772, 95% CI =1.638-13.9) were independent predictors of mortality. The treatment outcomes and incidence of death were in the acceptable ranges of national and international standards. Intervention to further reduce deaths has to focus on young children with comorbidities and altered general conditions.", "title": "" }, { "docid": "1839d9e6ef4bad29381105f0a604b731", "text": "Our focus is on the effects that dated ideas about the nature of science (NOS) have on curriculum, instruction and assessments. First we examine historical developments in teaching about NOS, beginning with the seminal ideas of James Conant. Next we provide an overview of recent developments in philosophy and cognitive sciences that have shifted NOS characterizations away from general heuristic principles toward cognitive and social elements. Next, we analyze two alternative views regarding ‘explicitly teaching’ NOS in pre-college programs. Version 1 is grounded in teachers presenting ‘Consensus-based Heuristic Principles’ in science lessons and activities. Version 2 is grounded in learners experience of ‘Building and Refining Model-Based Scientific Practices’ in critique and communication enactments that occur in longer immersion units and learning progressions. We argue that Version 2 is to be preferred over Version 1 because it develops the critical epistemic cognitive and social practices that scientists and science learners use when (1) developing and evaluating scientific evidence, explanations and knowledge and (2) critiquing and communicating scientific ideas and information; thereby promoting science literacy. 1 NOS and Science Education When and how did knowledge about science, as opposed to scientific content knowledge, become a targeted outcome of science education? From a US perspective, the decades of interest are the 1940s and 1950s when two major post-war developments in science education policy initiatives occurred. The first, in post secondary education, was the GI Bill An earlier version of this paper was presented as a plenary session by the first author at the ‘How Science Works—And How to Teach It’ workshop, Aarhus University, 23–25 June, 2011, Denmark. R. A. Duschl (&) The Pennsylvania State University, University Park, PA, USA e-mail: rad19@psu.edu R. Grandy Rice University, Houston, TX, USA 123 Sci & Educ DOI 10.1007/s11191-012-9539-4", "title": "" }, { "docid": "e289f0f11ee99c57ede48988cc2dbd5c", "text": "Generative Adversarial Networks (GANs) are becoming popular choices for unsupervised learning. At the same time there is a concerted effort in the machine learning community to expand the range of tasks in which learning can be applied as well as to utilize methods from other disciplines to accelerate learning. With this in mind, in the current work we suggest ways to enforce given constraints in the output of a GAN both for interpolation and extrapolation. The two cases need to be treated differently. For the case of interpolation, the incorporation of constraints is built into the training of the GAN. 
The incorporation of the constraints respects the primary game-theoretic setup of a GAN so it can be combined with existing algorithms. However, it can exacerbate the problem of instability during training that is well-known for GANs. We suggest adding small noise to the constraints as a simple remedy that has performed well in our numerical experiments. The case of extrapolation (prediction) is more involved. First, we employ a modified interpolation training process that uses noisy data but does not necessarily enforce the constraints during training. Second, the resulting modified interpolator is used for extrapolation where the constraints are enforced after each step through projection on the space of constraints.", "title": "" } ]
scidocsrr
27b6a1b43e2f004b195043c4a356d2f2
BLENDER: Enabling Local Search with a Hybrid Differential Privacy Model
[ { "docid": "dbbd9f6440ee0c137ee0fb6a4aadba38", "text": "In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings.", "title": "" }, { "docid": "89e51b29bf1486795d0b70c5817b6a75", "text": "In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.", "title": "" } ]
[ { "docid": "7eebeb133a9881e69bf3c367b9e20751", "text": "Advanced driver assistance systems or highly automated driving systems for lane change maneuvers are expected to enhance highway traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually progress to highly automated highway driving, the task of automatically determine if, when, and how to perform a lane change maneuver, is essential. This paper thereby presents a low-complexity lane change maneuver algorithm which determines whether a lane change maneuver is desirable, and if so, selects an appropriate inter-vehicle traffic gap and time instance to perform the maneuver, and calculates the corresponding longitudinal and lateral control trajectory. The ability of the proposed lane change maneuver algorithm to make appropriate maneuver decisions and generate smooth and safe lane change trajectories in various traffic situations is demonstrated by simulation and experimental results.", "title": "" }, { "docid": "497fcf32281c8e9555ac975a3de45a6a", "text": "This paper presents the framework, rules, games, controllers, and results of the first General Video Game Playing Competition, held at the IEEE Conference on Computational Intelligence and Games in 2014. The competition proposes the challenge of creating controllers for general video game play, where a single agent must be able to play many different games, some of them unknown to the participants at the time of submitting their entries. This test can be seen as an approximation of general artificial intelligence, as the amount of game-dependent heuristics needs to be severely limited. The games employed are stochastic real-time scenarios (where the time budget to provide the next action is measured in milliseconds) with different winning conditions, scoring mechanisms, sprite types, and available actions for the player. It is a responsibility of the agents to discover the mechanics of each game, the requirements to obtain a high score and the requisites to finally achieve victory. This paper describes all controllers submitted to the competition, with an in-depth description of four of them by their authors, including the winner and the runner-up entries of the contest. The paper also analyzes the performance of the different approaches submitted, and finally proposes future tracks for the competition.", "title": "" }, { "docid": "0db1a54964702697ca08e40d12949771", "text": "Synchronous and fixed-speed induction generators release the kinetic energy of their rotating mass when the power system frequency is reduced. In the case of doubly fed induction generator (DFIG)-based wind turbines, their control system operates to apply a restraining torque to the rotor according to a predetermined curve with respect to the rotor speed. This control system is not based on the power system frequency and there is negligible contribution to the inertia of the power system. A DFIG control system was modified to introduce inertia response to the DFIG wind turbine. Simulations were used to show that with the proposed control system, the DFIG wind turbine can supply considerably greater kinetic energy than a fixed-speed wind turbine.", "title": "" }, { "docid": "5f351dc1334f43ce1c80a1e78581d0f9", "text": "Based on keypoints extracted as salient image patches, an image can be described as a \"bag of visual words\" and this representation has been used in scene classification. 
The choice of dimension, selection, and weighting of visual words in this representation is crucial to the classification performance but has not been thoroughly studied in previous work. Given the analogy between this representation and the bag-of-words representation of text documents, we apply techniques used in text categorization, including term weighting, stop word removal, feature selection, to generate image representations that differ in the dimension, selection, and weighting of visual words. The impact of these representation choices to scene classification is studied through extensive experiments on the TRECVID and PASCAL collection. This study provides an empirical basis for designing visual-word representations that are likely to produce superior classification performance.", "title": "" }, { "docid": "2a91eeedbb43438f9ed449e14d93ce8e", "text": "In this paper, we introduce the concept of green noise—the midfrequency component of white noise—and its advantages over blue noise for digital halftoning. Unlike blue-noise dither patterns, which are composed exclusively of isolated pixels, green-noise dither patterns are composed of pixel-clusters making them less susceptible to image degradation from nonideal printing artifacts such as dot-gain. Although they are not the only techniques which generate clustered halftones, error-diffusion with output-dependent feedback and variations based on filter weight perturbation are shown to be good generators of green noise, thereby allowing for tunable coarseness. Using statistics developed for blue noise, we closely examine the spectral content of resulting dither patterns. We introduce two spatial-domain statistics for analyzing the spatial arrangement of pixels in aperiodic dither patterns, because greennoise patterns may be anisotropic, and therefore spectral statistics based on radial averages may be inappropriate for the study of these patterns.", "title": "" }, { "docid": "60664c058868f08a67d14172d87a4756", "text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.", "title": "" }, { "docid": "c777d2fcc2a27ca17ea82d4326592948", "text": "The existing methods for image captioning usually train the language model under the cross entropy loss, which results in the exposure bias and inconsistency of evaluation metric. 
Recent research has shown these two issues can be well addressed by policy gradient method in reinforcement learning domain attributable to its unique capability of directly optimizing the discrete and non-differentiable evaluation metric. In this paper, we utilize reinforcement learning method to train the image captioning model. Specifically, we train our image captioning model to maximize the overall reward of the sentences by adopting the temporal-difference (TD) learning method, which takes the correlation between temporally successive actions into account. In this way, we assign different values to different words in one sampled sentence by a discounted coefficient when back-propagating the gradient with the REINFORCE algorithm, enabling the correlation between actions to be learned. Besides, instead of estimating a “baseline” to normalize the rewards with another network, we utilize the reward of another Monte-Carlo sample as the “baseline” to avoid high variance. We show that our proposed method can improve the quality of generated captions and outperforms the state-of-the-art methods on the benchmark dataset MS COCO in terms of seven evaluation metrics.", "title": "" }, { "docid": "ef6160d304908ea87287f2071dea5f6d", "text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.", "title": "" }, { "docid": "ccc4b8f75e39488068293540aeb508e2", "text": "We present a novel approach to sketching 2D curves with minimally varying curvature as piecewise clothoids. A stable and efficient algorithm fits a sketched piecewise linear curve using a number of clothoid segments with G2 continuity based on a specified error tolerance. Further, adjacent clothoid segments can be locally blended to result in a G3 curve with curvature that predominantly varies linearly with arc length. We also handle intended sharp corners or G1 discontinuities, as independent rotations of clothoid pieces. Our formulation is ideally suited to conceptual design applications where aesthetic fairness of the sketched curve takes precedence over the precise interpolation of geometric constraints. We show the effectiveness of our results within a system for sketch-based road and robot-vehicle path design, where clothoids are already widely used.", "title": "" }, { "docid": "75bb8497138ef8e0bea1a56f7443791e", "text": "Generative communication is the basis of a new distributed programming langauge that is intended for systems programming in distributed settings generally and on integrated network computers in particular. 
It differs from previous interprocess communication models in specifying that messages be added in tuple-structured form to the computation environment, where they exist as named, independent entities until some process chooses to receive them. Generative communication results in a number of distinguishing properties in the new language, Linda, that is built around it. Linda is fully distributed in space and distributed in time; it allows distributed sharing, continuation passing, and structured naming. We discuss these properties and their implications, then give a series of examples. Linda presents novel implementation problems that we discuss in Part II. We are particularly concerned with implementation of the dynamic global name space that the generative communication model requires.", "title": "" }, { "docid": "3e14ca940db87b6d6be7017704be13e1", "text": "Digital Twin models are computerized clones of physical assets that can be used for in-depth analysis. Industrial production lines tend to have multiple sensors to generate near real-time status information for production. Industrial Internet of Things datasets are difficult to analyze and infer valuable insights such as points of failure, estimated overhead. etc. In this paper we introduce a simple way of formalizing knowledge as digital twin models coming from sensors in industrial production lines. We present a way on to extract and infer knowledge from large scale production line data, and enhance manufacturing process management with reasoning capabilities, by introducing a semantic query mechanism. Our system primarily utilizes a graph-based query language equivalent to conjunctive queries and has been enriched with inference rules.", "title": "" }, { "docid": "6762134c344053fb167ea286e21995f3", "text": "Image processing techniques are widely used in the domain of medical sciences for detecting various diseases, infections, tumors, cell abnormalities and various cancers. Detecting and curing a dise ase on time is very important in the field of medicine for protecting and saving human life. Mostly in case of high severity diseases where the mortality rates are more, the waiting time of patients for their reports such as blood test, MRI is more. The time taken for generation of any of the test is from 1-10 days. In high risk diseases like Hepatitis B, it is recommended that the patient’s waiting time should be as less as possible and the treatment should be started immediately. The current system used by the pathologists for identification of blood parameters is costly and the time involved in generation of the reports is also more sometimes leading to loss of patient’s life. Also the pathological tests are expensive, which are sometimes not affordable by the patient. This paper deals with an image processing technique used for detecting the abnormalities of blood cells in less time. The proposed technique also helps in segregating the blood cells in different categories based on the form factor.", "title": "" }, { "docid": "c7857bde224ef6252602798c349beb44", "text": "Context Several studies show that people with low health literacy skills have poorer health-related knowledge and comprehension. Contribution This updated systematic review of 96 studies found that low health literacy is associated with poorer ability to understand and follow medical advice, poorer health outcomes, and differential use of some health care services. 
Caution No studies examined the relationship between oral literacy (speaking and listening skills) and outcomes. Implication Although it is challenging, we need to find feasible ways to improve patients' health literacy skills and reduce the negative effects of low health literacy on outcomes. The Editors The term health literacy refers to a set of skills that people need to function effectively in the health care environment (1). These skills include the ability to read and understand text and to locate and interpret information in documents (print literacy); use quantitative information for tasks, such as interpreting food labels, measuring blood glucose levels, and adhering to medication regimens (numeracy); and speak and listen effectively (oral literacy) (2, 3). Approximately 80 million U.S. adults are thought to have limited health literacy, which puts them at risk for poorer health outcomes. Rates of limited health literacy are higher among elderly, minority, and poor persons and those with less than a high school education (4). Numerous policy and advocacy organizations have expressed concern about barriers caused by low health literacy, notably the Institute of Medicine's report Health Literacy: A Prescription to End Confusion in 2004 (5) and the U.S. Department of Health and Human Services' report National Action Plan to Improve Health Literacy in 2010 (6). To understand the relationship between health literacy level and use of health care services, health outcomes, costs, and disparities in health outcomes, we conducted a systematic evidence review for the Agency for Healthcare Research and Quality (AHRQ) (published in 2004), which was limited to the relationship between print literacy and health outcomes (7). We found a consistent association between low health literacy (measured by reading skills) and more limited health-related knowledge and comprehension. The relationship between health literacy level and other outcomes was less clear, primarily because of a lack of studies and relatively unsophisticated methods in the available studies. In this review, we update and expand the earlier review (7). Since 2004, researchers have conducted new and more sophisticated studies. Thus, in synthesizing the literature, we can now consider the relationship between outcomes and health literacy (print literacy alone or combined with numeracy) and between outcomes and the numeracy component of health literacy alone. Methods We developed and followed a protocol that used standard AHRQ Evidence-based Practice Center methods. The full report describes study methods in detail and presents evidence tables for each included study (1). Literature Search We searched MEDLINE, CINAHL, the Cochrane Library, PsycINFO, and ERIC databases. For health literacy, our search dates were from 2003 to May 2010. For numeracy, they were from 1966 to May 2010; we began at an earlier date because numeracy was not addressed in our 2004 review. For this review, we updated our searches beyond what was included in the full report from May 2010 through 22 February 2011 to be current with the most recent literature. No Medical Subject Heading terms specifically identify health literacyrelated articles, so we conducted keyword searches, including health literacy, literacy, numeracy, and terms or phrases used to identify related measurement instruments. We also hand-searched reference lists of pertinent review articles and editorials. Appendix Table 1 shows the full search strategy. Appendix Table 1. 
Search Strategy Study Selection We included English-language studies on persons of all ages whose health literacy or that of their caregivers (including numeracy or oral health literacy) had been measured directly and had not been self-reported. Studies had to compare participants in relation to an outcome, including health care access and service use, health outcomes, and costs of care. For numeracy studies, outcomes also included knowledge, because our earlier review had established the relationship between only health literacy and knowledge. We did not examine outcomes concerning attitudes, social norms, or patientprovider relationships. Data Abstraction and Quality Assessment After determining article inclusion, 1 reviewer entered study data into evidence tables; a second, senior reviewer checked the information for accuracy and completeness. Two reviewers independently rated the quality of studies as good, fair, or poor by using criteria designed to detect potential risk of bias in an observational study (including selection bias, measurement bias, and control for potential confounding) and precision of measurement. Data Synthesis and Strength of Evidence We assessed the overall strength of the evidence for each outcome separately for studies measuring health literacy and those measuring numeracy on the basis of information only from good- and fair-quality studies. Using AHRQ guidance (8), we graded the strength of evidence as high, moderate, low, or insufficient on the basis of the potential risk of bias of included studies, consistency of effect across studies, directness of the evidence, and precision of the estimate (Table 1). We determined the grade on the basis of the literature from the update searches. We then considered whether the findings from the 2004 review would alter our conclusions. We graded the body of evidence for an outcome as low if the evidence was limited to 1 study that controlled for potential confounding variables or to several small studies in which all, or only some, controlled for potential confounding variables or as insufficient if findings across studies were inconsistent or were limited to 1 unadjusted study. Because of heterogeneity across studies in their approaches to measuring health literacy, numeracy, and outcomes, we summarized the evidence through consensus discussions and did not conduct any meta-analyses. Table 1. Strength of Evidence Grades and Definitions Role of the Funding Source AHRQ reviewed a draft report and provided copyright release for this manuscript. The funding source did not participate in conducting literature searches, determining study eligibility, evaluating individual studies, grading evidence, or interpreting results. Results First, we present the results from our literature search and a summary of characteristics across studies, followed by findings specific to health literacy then numeracy. We generally highlight evidence of moderate or high strength and mention only outcomes with low or insufficient evidence. Where relevant, we comment on the evidence provided through the 2004 review. Tables 2 and 3 summarize our findings and strength-of-evidence grade for each included health literacy and numeracy outcome, respectively. Table 2. Health Literacy Outcome Results: Strength of Evidence and Summary of Findings, 2004 and 2011 Table 3. 
Numeracy Outcome Results: Strength of Evidence and Summary of Findings, 2011 Characteristics of Reviewed Studies We identified 3823 citations and evaluated 1012 full-text articles (Appendix Figure). Ultimately, we included 96 studies rated as good or fair quality. These studies were reported in 111 articles because some investigators reported study results in multiple publications (98 articles on health literacy, 22 on numeracy, and 9 on both). We found no studies that examined outcomes by the oral (verbal) component of health literacy. Of the 111 articles, 100 were rated as fair quality. All studies were observational, primarily cross-sectional designs (91 of 111 articles). The Supplement (health literacy) and Appendix Table 2 (numeracy) present summary information for each included article. Supplement. Overview of Health Literacy Studies Appendix Figure. Summary of evidence search and selection. KQ = key question. Appendix Table 2. Overview of Numeracy Studies Studies varied in their measurement of health literacy and numeracy. Commonly used instruments to measure health literacy are the Rapid Estimate of Adult Literacy in Medicine (REALM) (9), the Test of Functional Health Literacy in Adults (TOFHLA) (10), and short TOFHLA (S-TOFHLA). Instruments frequently used to measure numeracy are the SchwartzWoloshin Numeracy Test (11) and the Wide Range Achievement Test (WRAT) math subtest (12). Studies also differed in how investigators distinguished between levels or thresholds of health literacyeither as a continuous measure or as categorical groups. Some studies identified 3 groups, often called inadequate, marginal, and adequate, whereas others combined 2 of the 3 groups. Because evidence was sparse for evaluating differences between marginal and adequate health literacy, our results focus on the differences between the lowest and highest groups. Studies in this update generally included multivariate analyses rather than simpler unadjusted analyses. They varied considerably, however, in regard to which potential confounding variables are controlled (Supplement and Appendix Table 2). All results reported here are from adjusted analyses that controlled for potential confounding variables, unless otherwise noted. Relationship Between Health Literacy and Outcomes Use of Health Care Services and Access to Care Emergency Care and Hospitalizations. Nine studies examining the risk for emergency care use (1321) and 6 examining the risk for hospitalizations (1419) provided moderate evidence showing increased use of both services among people with lower health literacy, including elderly persons, clinic and inner-city hospital patients, patients with asthma, and patients with congestive heart failure.", "title": "" }, { "docid": "3f1f3e66fa1a117ef5c2f44d8f7dcbe8", "text": "The Softmax function is used in the final layer of nearly all existing sequence-tosequence models for language generation. However, it is usually the slowest layer to compute which limits the vocabulary size to a subset of most frequent types; and it has a large memory footprint. We propose a general technique for replacing the softmax layer with a continuous embedding layer. Our primary innovations are a novel probabilistic loss, and a training and inference procedure in which we generate a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax. 
We evaluate this new class of sequence-to-sequence models with continuous outputs on the task of neural machine translation. We show that our models train up to 2.5x faster than the state-of-the-art models while achieving comparable translation quality. These models are capable of handling very large vocabularies without compromising on translation quality or speed. They also produce more meaningful errors than the softmax-based models, as these errors typically lie in a subspace of the vector space of the reference translations1.", "title": "" }, { "docid": "8f75cc71e07209029947be095bf12b48", "text": "BACKGROUND\nGastroGard, an omeprazole powder paste formulation, is considered the standard treatment for gastric ulcers in horses and is highly effective. Gastrozol, an enteric-coated omeprazole formulation for horses, has recently become available, but efficacy data are controversial and sparse.\n\n\nOBJECTIVES\nTo investigate the efficacy of GastroGard and Gastrozol at labeled doses (4 and 1 mg of omeprazole per kg bwt, respectively, PO q24h) in healing of gastric ulcers.\n\n\nANIMALS\n40 horses; 9.5 ± 4.6 years; 491 ± 135 kg.\n\n\nMETHODS\nProspective, randomized, blinded study. Horses with an ulcer score ≥1 (Equine Gastric Ulcer Council) were randomly divided into 2 groups and treated for 2 weeks each with GastroGard followed by Gastrozol (A) or vice versa (B). After 2 and 4 weeks, scoring was repeated and compared with baseline. Plasma omeprazole concentrations were measured on the first day of treatment after administration of GastroGard (n = 5) or Gastrozol (n = 5).\n\n\nRESULTS\nCompared with baseline (squamous score (A) 1.65 ± 0.11, (B) 1.98 ± 0.11), ulcer scores at 2 weeks ((A) 0.89 ± 0.11, (B) 1.01 ± 0.11) and 4 weeks ((A) 1.10 ± 0.12, (B) 0.80 ± 0.12) had significantly decreased in both groups (P < .001), independent of treatment (P = .7). Plasma omeprazole concentrations were significantly higher after GastroGard compared with Gastrozol administration (AUCGG = 2856 (1405-4576) ng/mL × h, AUCGZ = 604 (430-1609) ng/mL × h; P = .03). The bioavailability for Gastrozol was 1.26 (95% CI 0.56-2.81) times higher than for GastroGard.\n\n\nCONCLUSIONS AND CLINICAL IMPORTANCE\nBoth Gastrozol and GastroGard, combined with appropriate environmental changes, promote healing of gastric ulcers in horses. However, despite enteric coating of Gastrozol, plasma omeprazole concentrations after single labeled doses were significantly higher with GastroGard.", "title": "" }, { "docid": "d0c43cf66df910094195bc3476cb8fa7", "text": "Global information systems development has become increasingly prevalent and is facing a variety of challenges, including the challenge of cross-cultural management. However, research on exactly how cross-cultural factors affect global information systems development work is limited, especially with respect to distributed collaborative work between the U.S. and China. This paper draws on the interviews of Chinese IT professionals and discusses three emergent themes relevant to cross-cultural challenges: the complexity of language issues, culture and communication styles and work behaviors, and cultural understandings at different levels. 
Implications drawn from our findings will provide actionable knowledge to organizational management entities.", "title": "" }, { "docid": "0c3fa5b92d95abb755f12dda030474c2", "text": "This paper examines the hypothesis that the persistence of low spatial and marital mobility in rural India, despite increased growth rates and rising inequality in recent years, is due to the existence of sub-caste networks that provide mutual insurance to their members. Unique panel data providing information on income, assets, gifts, loans, consumption, marriage, and migration are used to link caste networks to household and aggregate mobility. Our key finding, consistent with the hypothesis that local risk-sharing networks restrict mobility, is that among households with the same (permanent) income, those in higher-income caste networks are more likely to participate in caste-based insurance arrangements and are less likely to both out-marry and out-migrate. At the aggregate level, the networks appear to have coped successfully with the rising inequality within sub-castes that accompanied the Green Revolution. The results suggest that caste networks will continue to smooth consumption in rural India for the foreseeable future, as they have for centuries, unless alternative consumption-smoothing mechanisms of comparable quality become available. ∗We are very grateful to Andrew Foster for many useful discussions that substantially improved the paper. We received helpful comments from Jan Eeckhout, Rachel Kranton, Ethan Ligon and seminar participants at Arizona, Chicago, Essex, Georgetown, Harvard, IDEI, ITAM, LEA-INRA, LSE, Ohio State, UCLA, and NBER. Alaka Holla provided excellent research assistance. Research support from NICHD grant R01-HD046940 and NSF grant SES-0431827 is gratefully acknowledged. †Brown University and NBER ‡Yale University", "title": "" }, { "docid": "c1438a335a41da3b61e6ca1100b97074", "text": "What dimensions can be identified in the trust formation processes in Business-to-Consumer (B-to-C) electronic commerce (e-commerce)? How do these differ in importance between academia and practitioners? The purpose of this research is to build a model of multidimensional trust formation for online exchanges in B-to-C electronic commerce. Further, to study the relative importance of the dimensions between two expert groups (academics and practitioners), two semantic network and content analyses are conducted: one for academia’s perspectives and another for practitioners’ perspectives of trust in B-to-C electronic commerce. The results show that the two perspectives are divergent in some ways and complementary in other ways. We believe that the two need to be combined to represent meaningful trust-building mechanisms in websites. D 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "3b145aa14e1052467f78b911cda4109b", "text": "Dual Connectivity(DC) is one of the key technologies standardized in Release 12 of the 3GPP specifications for the Long Term Evolution (LTE) network. It attempts to increase the per-user throughput by allowing the user equipment (UE) to maintain connections with the MeNB (master eNB) and SeNB (secondary eNB) simultaneously, which are inter-connected via non-ideal backhaul. In this paper, we focus on one of the use cases of DC whereby the downlink U-plane data is split at the MeNB and transmitted to the UE via the associated MeNB and SeNB concurrently. 
In this case, an out-of-order packet delivery problem may occur at the UE due to the delay over the non-ideal backhaul link, as well as the dynamics of channel conditions over the MeNB-UE and SeNB-UE links, which will introduce extra delay for re-ordering the packets. As a solution, we propose to adopt the RaptorQ FEC code to encode the source data at the MeNB, and then the encoded symbols are separately transmitted through the MeNB and SeNB. The out-of-order problem can be effectively eliminated since the UE can decode the original data as long as it receives enough encoded symbols from either the MeNB or SeNB. We present a detailed protocol design for the RaptorQ-code-based concurrent transmission scheme, and simulation results are provided to illustrate the performance of the proposed scheme.", "title": "" } ]
scidocsrr
c4d3df5773c0686708f2243139f1150b
Blockchain for smart grid resilience: Exchanging distributed energy at speed, scale and security
[ { "docid": "779c0081af334a597f6ee6942d7e7240", "text": "We document our experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind. Since smart contracts deal directly with the movement of valuable currency units between contratual parties, security of a contract program is of paramount importance. Our lab exposed numerous common pitfalls in designing safe and secure smart contracts. We document several typical classes of mistakes students made, suggest ways to fix/avoid them, and advocate best practices for programming smart contracts. Finally, our pedagogical efforts have also resulted in online open course materials for programming smart contracts, which may be of independent interest to the community.", "title": "" }, { "docid": "383258ea128ba901add3d854224f0ddb", "text": "We present an architecture for peer-to-peer energy markets which can guarantee that operational constraints are respected and payments are fairly rendered, without relying on a centralized utility or microgrid aggregator. We demonstrate how to address trust, security, and transparency issues by using blockchains and smart contracts, two emerging technologies which can facilitate decentralized coordination between non-trusting agents. While blockchains are receiving considerable interest as a platform for distributed computation and data management, this is the first work to examine their use to facilitate distributed optimization and control. Using the Alternating Direction Method of Multipliers (ADMM), we pose a decentralized optimal power flow (OPF) model for scheduling a mix of batteries, shapable loads, and deferrable loads on an electricity distribution network. The DERs perform local optimization steps, and a smart contract on the blockchain serves as the ADMM coordinator, allowing the validity and optimality of the solution to be verified. The optimal schedule is securely stored on the blockchain, and payments can be automatically, securely, and trustlessly rendered without requiring a microgrid operator.", "title": "" } ]
[ { "docid": "e0275564adaec32cfc5724bc90b66e1f", "text": "In order to have an effective command of the relationship between customers and products, we have constructed a personalized recommender system which incorporates content-based, collaborative filtering, and data mining techniques. We have also introduced a new scoring approach to determine customers’ interest scores on products. To demonstrate how our system works, we used it to analyze real cosmetic data and generate a recommender score table for sellers to refer to. After tracking its performance for 1 year, we have obtained quite impressive results. q 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3a855c3c3329ff63037711e8d17249e3", "text": "In this work, we present an adaptation of the sequence-tosequence model for structured vision tasks. In this model, the output variables for a given input are predicted sequentially using neural networks. The prediction for each output variable depends not only on the input but also on the previously predicted output variables. The model is applied to spatial localization tasks and uses convolutional neural networks (CNNs) for processing input images and a multi-scale deconvolutional architecture for making spatial predictions at each step. We explore the impact of weight sharing with a recurrent connection matrix between consecutive predictions, and compare it to a formulation where these weights are not tied. Untied weights are particularly suited for problems with a fixed sized structure, where different classes of output are predicted at different steps. We show that chain models achieve top performing results on human pose estimation from images and videos.", "title": "" }, { "docid": "573faddaa6fe37712776592a430d09cb", "text": "We present the largest and longest measurement of online tracking to date based on real users. The data, which is made publicly available, is generated from more than 780 million page loads over the course of the last 10 months. Previous attempts to measure the tracking ecosystem, are done via measurement platforms that do not interact with websites the same way a user does. We instrument a crowd-sourced measurement of third-parties across the web via users who consent to data collection via a browser extension. The collection is done with privacy-by-design in mind, and introduces no privacy side effects. This approach overcomes limitations of previous work by collecting real web usage across multiple countries, ISP and browser configurations, and on difficult to crawl pages, such as those behind logins, giving a more accurate portrayal of the online-tracking ecosystem. The data1, which we plan to continue contributing to and maintain in the future, and WhoTracks.Me website – the living representation of the data, are available for researchers, regulators, journalists, web developers and users to detect tracking behaviours, analyse the tracking landscape, develop efficient tools, devise policies and raise awareness of the negative externalities tracking introduces. We believe this work provides the transparency needed to shine a light on a very opaque industry.", "title": "" }, { "docid": "381a11fe3d56d5850ec69e2e9427e03f", "text": "We present an approximation algorithm that takes a pool of pre-trained models as input and produces from it a cascaded model with similar accuracy but lower average-case cost. 
Applied to state-of-the-art ImageNet classification models, this yields up to a 2x reduction in floating point multiplications, and up to a 6x reduction in average-case memory I/O. The auto-generated cascades exhibit intuitive properties, such as using lower-resolution input for easier images and requiring higher prediction confidence when using a computationally cheaper model.", "title": "" }, { "docid": "08e02afe2ef02fc9c8fff91cf7a70553", "text": "Matrix factorization is a fundamental technique in machine learning that is applicable to collaborative filtering, information retrieval and many other areas. In collaborative filtering and many other tasks, the objective is to fill in missing elements of a sparse data matrix. One of the biggest challenges in this case is filling in a column or row of the matrix with very few observations. In this paper we introduce a Bayesian matrix factorization model that performs regression against side information known about the data in addition to the observations. The side information helps by adding observed entries to the factored matrices. We also introduce a nonparametric mixture model for the prior of the rows and columns of the factored matrices that gives a different regularization for each latent class. Besides providing a richer prior, the posterior distribution of mixture assignments reveals the latent classes. Using Gibbs sampling for inference, we apply our model to the Netflix Prize problem of predicting movie ratings given an incomplete user-movie ratings matrix. Incorporating rating information with gathered metadata information, our Bayesian approach outperforms other matrix factorization techniques even when using fewer dimensions.", "title": "" }, { "docid": "f102cc8d3ba32f9a16f522db25143e2d", "text": "As technology advances man-machine interaction is becoming an unavoidable activity. So an effective method of communication with machines enhances the quality of life. If it is able to operate a system by simply commanding, then it will be a great blessing to the users. Speech is the most effective mode of communication used by humans. So by introducing voice user interfaces the interaction with the machines can be made more user friendly. This paper implements a speaker independent speech recognition system for limited vocabulary Malayalam Words in Raspberry Pi. Mel Frequency Cepstral Coefficients (MFCC) are the features for classification and this paper proposes Radial Basis Function (RBF) kernel in Support Vector Machine (SVM) classifier gives better accuracy in speech recognition than linear kernel. An overall accuracy of 91.8% is obtained with this work.", "title": "" }, { "docid": "9800cb574743679b4517818c9653ada5", "text": "This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs). Unlike existing methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We minimize the reconstruction error of the nonlinear responses, subject to a low-rank constraint which helps to reduce the complexity of filters. We develop an effective solution to this constrained nonlinear optimization problem. An algorithm is also presented for reducing the accumulated error when multiple layers are approximated. A whole-model speedup ratio of 4× is demonstrated on a large network trained for ImageNet, while the top-5 error rate is only increased by 0.9%. 
Our accelerated model has a comparably fast speed as the “AlexNet” [11], but is 4.7% more accurate.", "title": "" }, { "docid": "c62a2f7fae5d56617b71ffc070a30839", "text": "Digitization brings new possibilities to ease our daily life activities by the means of assistive technology. Amazon Alexa, Microsoft Cortana, Samsung Bixby, to name only a few, heralded the age of smart personal assistants (SPAs), personified agents that combine artificial intelligence, machine learning, natural language processing and various actuation mechanisms to sense and influence the environment. However, SPA research seems to be highly fragmented among different disciplines, such as computer science, human-computer-interaction and information systems, which leads to ‘reinventing the wheel approaches’ and thus impede progress and conceptual clarity. In this paper, we present an exhaustive, integrative literature review to build a solid basis for future research. We have identified five functional principles and three research domains which appear promising for future research, especially in the information systems field. Hence, we contribute by providing a consolidated, integrated view on prior research and lay the foundation for an SPA classification scheme.", "title": "" }, { "docid": "e4e372287a5d53bd3926705e01b43235", "text": "The regular gathering of student information has created a high level of complexity, and also an incredible opportunity for teachers to enhance student learning experience. The digital information that learners leave online about their interests, engagement and their preferences gives significant measures of information that can be mined to customise their learning experience better. The motivation behind this article is to inspect the quickly developing field of Learning Analytics and to study why and how enormous information will benefit teachers, institutes, online course developers and students as a whole. The research will discuss the advancement in Big Data and how is it useful in education, along with an overview of the importance of various stakeholders and the challenges that lie ahead. We also look into the tools and techniques that are put into practice to realize the benefits of Analytics in Education. Our results suggest that this field has the immense scope of development but ethical and privacy issues present a challenge.", "title": "" }, { "docid": "4dd403bbecb8d03ebdd8de9923ee629b", "text": "Phishing is a major problem on the Web. Despite the significant attention it has received over the years, there has been no definitive solution. While the state-of-the-art solutions have reasonably good performance, they require a large amount of training data and are not adept at detecting phishing attacks against new targets. In this paper, we begin with two core observations: (a) although phishers try to make a phishing webpage look similar to its target, they do not have unlimited freedom in structuring the phishing webpage, and (b) a webpage can be characterized by a small set of key terms, how these key terms are used in different parts of a webpage is different in the case of legitimate and phishing webpages. Based on these observations, we develop a phishing detection system with several notable properties: it requires very little training data, scales well to much larger test data, is language-independent, fast, resilient to adaptive attacks and implemented entirely on client-side. 
In addition, we developed a target identification component that can identify the target website that a phishing webpage is attempting to mimic. The target detection component is faster than previously reported systems and can help minimize false positives in our phishing detection system.", "title": "" }, { "docid": "6914ba1e0a6a60a9d8956f9b9429ab45", "text": "Quantum cognition research applies abstract, mathematical principles of quantum theory to inquiries in cognitive science. It differs fundamentally from alternative speculations about quantum brain processes. This topic presents new developments within this research program. In the introduction to this topic, we try to answer three questions: Why apply quantum concepts to human cognition? How is quantum cognitive modeling different from traditional cognitive modeling? What cognitive processes have been modeled using a quantum account? In addition, a brief introduction to quantum probability theory and a concrete example is provided to illustrate how a quantum cognitive model can be developed to explain paradoxical empirical findings in psychological literature.", "title": "" }, { "docid": "d6a80510eaf935268aec872e2d9112e0", "text": "SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established.\n We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack.\n The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations.", "title": "" }, { "docid": "14e75e14ba61e01ae905cbf0ba0879b3", "text": "A new Kalman-filter based active contour model is proposed for tracking of nonrigid objects in combined spatio-velocity space. The model employs measurements of gradient-based image potential and of optical-flow along the contour as system measurements. In order to improve robustness to image clutter and to occlusions an optical-flow based detection mechanism is proposed. The method detects and rejects spurious measurements which are not consistent with previous estimation of image motion.", "title": "" }, { "docid": "0e3a65ac8f3369f26517dea0ead150c7", "text": "While some web search users know exactly what they are looking for, others are willing to explore topics related to an initial interest. Often, the user’s initial interest can be uniquely linked to an entity in a knowledge base. 
In this case, it is natural to recommend the explicitly linked entities for further exploration. In real world knowledge bases, however, the number of linked entities may be very large and not all related entities may be equally relevant. Thus, there is a need for ranking related entities. In this paper, we describe Spark, a recommendation engine that links a user’s initial query to an entity within a knowledge base and provides a ranking of the related entities. Spark extracts several signals from a variety of data sources, including Yahoo! Web Search, Twitter, and Flickr, using a large cluster of computers running Hadoop. These signals are combined with a machine learned ranking model in order to produce a final recommendation of entities to user queries. This system is currently powering Yahoo! Web Search result pages.", "title": "" }, { "docid": "94bfebd26c5aa21c7ebafb37bba30f70", "text": "The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning.", "title": "" }, { "docid": "cd94c463b84e2e57e3f6ef010bab4eec", "text": "Caffeine is probably the most frequently ingested pharmacologically active substance in the world. It is found in common beverages (coffee, tea, soft drinks), in products containing cocoa or chocolate, and in medications. Because of its wide consumption at different levels by most segments of the population, the public and the scientific community have expressed interest in the potential for caffeine to produce adverse effects on human health. The possibility that caffeine ingestion adversely affects human health was investigated based on reviews of (primarily) published human studies obtained through a comprehensive literature search. Based on the data reviewed, it is concluded that for the healthy adult population, moderate daily caffeine intake at a dose level up to 400 mg day(-1) (equivalent to 6 mg kg(-1) body weight day(-1) in a 65-kg person) is not associated with adverse effects such as general toxicity, cardiovascular effects, effects on bone status and calcium balance (with consumption of adequate calcium), changes in adult behaviour, increased incidence of cancer and effects on male fertility. The data also show that reproductive-aged women and children are 'at risk' subgroups who may require specific advice on moderating their caffeine intake. Based on available evidence, it is suggested that reproductive-aged women should consume </=300 mg caffeine per day (equivalent to 4.6 mg kg(-1) bw day(-1) for a 65-kg person) while children should consume </=2.5 mg kg(-1) bw day(-1).", "title": "" }, { "docid": "add80fd9c0cb935a5868e0b31c1d7432", "text": "Adders are the basic building block in the arithmetic circuits. In order to achieve high speed and low power consumption a 32bit carry skip adder is proposed. In the conventional technique, a hybrid variable latency extension is used with a method called as parallel prefix network (Brent-Kung). As a result, larger delay along with higher power consumption is obtained, which is the main drawback for any VLSI applications. In order to overcome this, Han Carlson adder along with CSA is used to design parallel prefix network. Therefore it reduces delay and power consumption. 
The proposed structure is designed by using HSPICE simulation tool. Therefore, a lower delay and low power consumption can be achieved in the benchmark circuits. Keyword: High speed, low delay, efficient power consumption and size.", "title": "" }, { "docid": "47e17d9a02c6a97188108b49f67f986b", "text": "Driver's gaze direction is an indicator of driver state and plays a significantly role in driving safety. Traditional gaze zone estimation methods based on eye model have disadvantages due to the vulnerability under large head movement. Different from these methods, an appearance-based head pose-free eye gaze prediction method is proposed in this paper, for driver gaze zone estimation under free head movement. To achieve this goal, a gaze zone classifier is trained with head vectors and eye image features by random forest. The head vector is calculated by Pose from Orthography and Scaling with ITerations (POSIT) where a 3D face model is combined with facial landmark detection. And the eye image features are derived from eye images which extracted through eye region localization. These features are presented as the combination of sparse coefficients by sparse encoding with eye image dictionary, having good potential to carry information of the eye images. Experimental results show that the proposed method is applicable in real driving environment.", "title": "" }, { "docid": "cb2f5ac9292df37860b02313293d2f04", "text": "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner — with the objective of searching for individuals that have similar pattern of behavior as the known seeds — based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube’s engagement graph spanning over a", "title": "" } ]
scidocsrr
bd350c8fef15aacdde0c205b4aab44c3
Learning Structured Semantic Embeddings for Visual Recognition
[ { "docid": "574a2a883f4b97793e5264b6f7beb073", "text": "We address the problem of weakly supervised object localization where only image-level annotations are available for training. Many existing approaches tackle this problem through object proposal mining. However, a substantial amount of noise in object proposals causes ambiguities for learning discriminative object models. Such approaches are sensitive to model initialization and often converge to an undesirable local minimum. In this paper, we address this problem by progressive domain adaptation with two main steps: classification adaptation and detection adaptation. In classification adaptation, we transfer a pre-trained network to our multi-label classification task for recognizing the presence of a certain object in an image. In detection adaptation, we first use a mask-out strategy to collect class-specific object proposals and apply multiple instance learning to mine confident candidates. We then use these selected object proposals to fine-tune all the layers, resulting in a fully adapted detection network. We extensively evaluate the localization performance on the PASCAL VOC and ILSVRC datasets and demonstrate significant performance improvement over the state-of-the-art methods.", "title": "" }, { "docid": "df163d94fbf0414af1dde4a9e7fe7624", "text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.", "title": "" } ]
[ { "docid": "76fc5bf9bc5b5d6d19e30537ce0b173d", "text": "Data Stream Management Systems (DSMS) are crucial for modern high-volume/high-velocity data-driven applications, necessitating a distributed approach to processing them. In addition, data providers often require certain levels of confidentiality for their data, especially in cases of user-generated data, such as those coming out of physical activity/health tracking devices (i.e., our motivating application). This demonstration will showcase Synefo, an infrastructure that enables elastic scaling of DSMS operators, and CryptStream, a framework that provides confidentiality and access controls for data streams while allowing computation on untrusted servers, fused as CE-Storm. We will demonstrate both systems working in tandem and also visualize their behavior over time under different scenarios.", "title": "" }, { "docid": "c0c7752c6b9416e281c3649e70f9daae", "text": "Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median.", "title": "" }, { "docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa", "text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking", "title": "" }, { "docid": "f262ccb0c19c84b51d48eb735fdaa54e", "text": "The nutritional quality of food and beverage products sold in vending machines has been implicated as a contributing factor to the development of an obesogenic food environment. How comprehensive, reliable, and valid are the current assessment tools for vending machines to support or refute these claims? A systematic review was conducted to summarize, compare, and evaluate the current methodologies and available tools for vending machine assessment. A total of 24 relevant research studies published between 1981 and 2013 met inclusion criteria for this review. 
The methodological variables reviewed in this study include assessment tool type, study location, machine accessibility, product availability, healthfulness criteria, portion size, price, product promotion, and quality of scientific practice. There were wide variations in the depth of the assessment methodologies and product healthfulness criteria utilized among the reviewed studies. Of the reviewed studies, 39% evaluated machine accessibility, 91% evaluated product availability, 96% established healthfulness criteria, 70% evaluated portion size, 48% evaluated price, 52% evaluated product promotion, and 22% evaluated the quality of scientific practice. Of all reviewed articles, 87% reached conclusions that provided insight into the healthfulness of vended products and/or vending environment. Product healthfulness criteria and complexity for snack and beverage products was also found to be variable between the reviewed studies. These findings make it difficult to compare results between studies. A universal, valid, and reliable vending machine assessment tool that is comprehensive yet user-friendly is recommended.", "title": "" }, { "docid": "cd68bb753f25843b6706de39ffb3073d", "text": "The project aims to give convenience for searching best matched cuisine in Yelp review dataset, and also translate English reviews to German by attention neural machine translation. It’s a great fun to explore Natural Language Processing applications in search engine, besides improving the distributed performance for large dataset. Our project and demo focus on the high performance of retrieval valuable Yelp reviews and direct real-time hyperlinks to business’ homepage with query terms of cuisine; our report will focus on illustrating zigzag discovery path on \"the-state-of-the-art\" neural machine translation. We have spent much time on acquiring new NLP knowledge, learnt tf-seq2seq by TensorFlow, trained translationmodels on GPU servers, and translated the reviewing dataset from English to German.", "title": "" }, { "docid": "1e8e4364427d18406594af9ad3a73a28", "text": "The Internet Addiction Scale (IAS) is a self-report instrument based on the 7 Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) substance dependence criteria and 2 additional criteria recommended by Griffiths (1998). The IAS was administered to 233 undergraduates along with 4 measures pertaining to loneliness and boredom proneness. An item reliability analysis reduced the initial scale from 36 to 31 items (with a Cronbach's alpha of .95). A principal-components analysis indicated that the IAS consisted mainly of one factor. Multiple regression analyses revealed that Family and Social Loneliness and Boredom Proneness were significantly correlated with the IAS; Family and Social Loneliness uniquely predicted IAS scores. No evidence for widespread Internet addiction was found.", "title": "" }, { "docid": "3b4b1386322c820f15086e5953fa1ac4", "text": "A key goal in natural language generation (NLG) is to enable fast generation even with large vocabularies, grammars and worlds. In this work, we build upon a recently proposed NLG system, Sentence Tree Realization with UCT (STRUCT). 
We describe four enhancements to this system: (i) pruning the grammar based on the world and the communicative goal, (ii) intelligently caching and pruning the combinatorial space of semantic bindings, (iii) reusing the lookahead search tree at different search depths, and (iv) learning and using a search control heuristic. We evaluate the resulting system on three datasets of increasing size and complexity, the largest of which has a vocabulary of about 10K words, a grammar of about 32K lexicalized trees and a world with about 11K entities and 23K relations between them. Our results show that the system has a median generation time of 8.5s and finds the best sentence on average within 25s. These results are based on a sequential, interpreted implementation and are significantly better than the state of the art for planningbased NLG systems.", "title": "" }, { "docid": "c592f46ffd8286660b9e233127cefea7", "text": "According to literature, penetration pricing is the dominant pricing strategy for network effect markets. In this paper we show that diffusion of products in a network effect market does not only vary with the set of pricing strategies chosen by competing vendors but also strongly depends on the topological structure of the customers' network. This stresses the inappropriateness of classical \"installed base\" models (abstracting from this structure). Our simulations show that although competitive prices tend to be significantly higher in close topology markets, they lead to lower total profit and lower concentration of vendors' profit in these markets.", "title": "" }, { "docid": "17ec5256082713e85c819bb0a0dd3453", "text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.", "title": "" }, { "docid": "5f862920548a825a20c1a860c0ef20ca", "text": "A recommendation system tracks past actions of a group of users to make recommendations to individual members of the group. The growth of computer-mediated marketing and commerce has led to increased interest in such systems. We introduce a simple analytical framework for recommendation systems, including a basis for defining the utilit y of such a system. We perform probabilistic analyses of algorithmic methods within this framework. These analyses yield insights into how much utility can be derived from the memory of past actions and on how this memory can be exploited.", "title": "" }, { "docid": "c5113ff741d9e656689786db10484a07", "text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. 
Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.", "title": "" }, { "docid": "353761bae5088e8ee33025fc04695297", "text": " Land use can exert a powerful influence on ecological systems, yet our understanding of the natural and social factors that influence land use and land-cover change is incomplete. We studied land-cover change in an area of about 8800 km2 along the lower part of the Wisconsin River, a landscape largely dominated by agriculture. Our goals were (a) to quantify changes in land cover between 1938 and 1992, (b) to evaluate the influence of abiotic and socioeconomic variables on land cover in 1938 and 1992, and (c) to characterize the major processes of land-cover change between these two points in time. The results showed a general shift from agricultural land to forest. Cropland declined from covering 44% to 32% of the study area, while forests and grassland both increased (from 32% to 38% and from 10% to 14% respectively). Multiple linear regressions using three abiotic and two socioeconomic variables captured 6% to 36% of the variation in land-cover categories in 1938 and 9% to 46% of the variation in 1992. Including socioeconomic variables always increased model performance. Agricultural abandonment and a general decline in farming intensity were the most important processes of land-cover change among the processes considered. Areas characterized by the different processes of land-cover change differed in the abiotic and socioeconomic variables that had explanatory power and can be distinguished spatially. Understanding the dynamics of landscapes dominated by human impacts requires methods to incorporate socioeconomic variables and anthropogenic processes in the analyses. 
Our method of hypothesizing and testing major anthropogenic processes may be a useful tool for studying the dynamics of cultural landscapes.", "title": "" }, { "docid": "e17ad914854d148d5ca8000bdcab4298", "text": "BACKGROUND\nThe introduction of proton pump inhibitors (PPIs) into clinical practice has revolutionized the management of acid-related diseases. Studies in primary care and emergency settings suggest that PPIs are frequently prescribed for inappropriate indications or for indications where their use offers little benefit. Inappropriate PPI use is a matter of great concern, especially in the elderly, who are often affected by multiple comorbidities and are taking multiple medications, and are thus at an increased risk of long-term PPI-related adverse outcomes as well as drug-to-drug interactions. Herein, we aim to review the current literature on PPI use and develop a position paper addressing the benefits and potential harms of acid suppression with the purpose of providing evidence-based guidelines on the appropriate use of these medications.\n\n\nMETHODS\nThe topics, identified by a Scientific Committee, were assigned to experts selected by three Italian Scientific Societies, who independently performed a systematic search of the relevant literature using Medline/PubMed, Embase, and the Cochrane databases. Search outputs were distilled, paying more attention to systematic reviews and meta-analyses (where available) representing the best evidence. The draft prepared on each topic was circulated amongst all the members of the Scientific Committee. Each expert then provided her/his input to the writing, suggesting changes and the inclusion of new material and/or additional relevant references. The global recommendations were then thoroughly discussed in a specific meeting, refined with regard to both content and wording, and approved to obtain a summary of current evidence.\n\n\nRESULTS\nTwenty-five years after their introduction into clinical practice, PPIs remain the mainstay of the treatment of acid-related diseases, where their use in gastroesophageal reflux disease, eosinophilic esophagitis, Helicobacter pylori infection, peptic ulcer disease and bleeding as well as, and Zollinger-Ellison syndrome is appropriate. Prevention of gastroduodenal mucosal lesions (and symptoms) in patients taking non-steroidal anti-inflammatory drugs (NSAIDs) or antiplatelet therapies and carrying gastrointestinal risk factors also represents an appropriate indication. On the contrary, steroid use does not need any gastroprotection, unless combined with NSAID therapy. In dyspeptic patients with persisting symptoms, despite successful H. pylori eradication, short-term PPI treatment could be attempted. Finally, addition of PPIs to pancreatic enzyme replacement therapy in patients with refractory steatorrhea may be worthwhile.\n\n\nCONCLUSIONS\nOverall, PPIs are irreplaceable drugs in the management of acid-related diseases. However, PPI treatment, as any kind of drug therapy, is not without risk of adverse effects. The overall benefits of therapy and improvement in quality of life significantly outweigh potential harms in most patients, but those without clear clinical indication are only exposed to the risks of PPI prescription. Adhering with evidence-based guidelines represents the only rational approach to effective and safe PPI therapy. 
Please see related Commentary: doi: 10.1186/s12916-016-0724-1 .", "title": "" }, { "docid": "9961f44d4ab7d0a344811186c9234f2c", "text": "This paper discusses the trust related issues and arguments (evidence) Internet stores need to provide in order to increase consumer trust. Based on a model of trust from academic literature, in addition to a model of the customer service life cycle, the paper develops a framework that identifies key trust-related issues and organizes them into four categories: personal information, product quality and price, customer service, and store presence. It is further validated by comparing the issues it raises to issues identified in a review of academic studies, and to issues of concern identified in two consumer surveys. The framework is also applied to ten well-known web sites to demonstrate its applicability. The proposed framework will benefit both practitioners and researchers by identifying important issues regarding trust, which need to be accounted for in Internet stores. For practitioners, it provides a guide to the issues Internet stores need to address in their use of arguments. For researchers, it can be used as a foundation for future empirical studies investigating the effects of trust-related arguments on consumers’ trust in Internet stores.", "title": "" }, { "docid": "746bb0b7ed159fcfbe7940a33e6debf1", "text": "Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimization problem. A simple way to enforce the Lipschitz constraint on the class of functions, which can be modeled by the neural network, is weight clipping. Augmenting the loss by a regularization term that penalizes the deviation of the gradient norm of the critic (as a function of the network’s input) from one, was proposed as an alternative that improves training. We present theoretical arguments why using a weaker regularization term enforcing the Lipschitz constraint is preferable. These arguments are supported by experimental results on several data sets.", "title": "" }, { "docid": "6f2dbfcce622454579c607bf7a8a2797", "text": "A new 3D graphics and multimedia hardware architecture, cod named Talisman, is described which exploits both spatial and temporal coherence to reduce the cost of high quality animatio Individually animated objects are rendered into independent image layers which are composited together at video refresh ra to create the final display. During the compositing process, a fu affine transformation is applied to the layers to allow translatio rotation, scaling and skew to be used to simulate 3D motion of objects, thus providing a multiplier on 3D rendering performan and exploiting temporal image coherence. Image compression broadly exploited for textures and image layers to reduce imag capacity and bandwidth requirements. Performance rivaling hi end 3D graphics workstations can be achieved at a cost point two to three hundred dollars.", "title": "" }, { "docid": "b0d456d92d3cb9d6e1fb5372f3819951", "text": "“Clothes make the man,” said Mark Twain. This article presents a survey of the literature on Artificial Intelligence applications to clothing fashion. 
An AIbased stylist model is proposed based on fundamental fashion theory and the early work of AI in fashion. This study examines three essential components of a complete styling task as well as previously launched applications and earlier research work. Additionally, the implementation and performance of Neural Networks, Genetic Algorithms, Support Vector Machines and other AI methods used in the fashion domain are discussed in detail. This article explores the focus of previous studies and provides a general overview of the usage of AI techniques in the fashion domain.", "title": "" }, { "docid": "5ce93a1c09b4da41f0cc920d5c7e6bdc", "text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.", "title": "" }, { "docid": "4b74b9d4c4b38082f9f667e363f093b2", "text": "We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. 
Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org.", "title": "" }, { "docid": "281a9d0c9ad186c1aabde8c56c41cefa", "text": "Hardware manipulations pose a serious threat to numerous systems, ranging from a myriad of smart-X devices to military systems. In many attack scenarios an adversary merely has access to the low-level, potentially obfuscated gate-level netlist. In general, the attacker possesses minimal information and faces the costly and time-consuming task of reverse engineering the design to identify security-critical circuitry, followed by the insertion of a meaningful hardware Trojan. These challenges have been considered only in passing by the research community. The contribution of this work is threefold: First, we present HAL, a comprehensive reverse engineering and manipulation framework for gate-level netlists. HAL allows automating defensive design analysis (e.g., including arbitrary Trojan detection algorithms with minimal effort) as well as offensive reverse engineering and targeted logic insertion. Second, we present a novel static analysis Trojan detection technique ANGEL which considerably reduces the false-positive detection rate of the detection technique FANCI. Furthermore, we demonstrate that ANGEL is capable of automatically detecting Trojans obfuscated with DeTrust. Third, we demonstrate how a malicious party can semi-automatically inject hardware Trojans into third-party designs. We present reverse engineering algorithms to disarm and trick cryptographic self-tests, and subtly leak cryptographic keys without any a priori knowledge of the design’s internal workings.", "title": "" } ]
scidocsrr
84ceca462bb655e036cc43e9b1124984
Computing on the Edge of Chaos: Structure and Randomness in Encrypted Computation
[ { "docid": "d92b7ee3739843c2649d0f3f1e0ee5b2", "text": "In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext, can be used for performing SIMD homomorphic operations on packed ciphertext. This provides an alternative to the Smart-Vercauteren (SV) ciphertextpacking technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of “General-LWE”). Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with “general-LWE” schemes, suggesting yet another tradeoff that can be optimized for different settings. Acknowledgments The first author is sponsored by DARPA under agreement number FA8750-11-C-0096. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. The second and third authors are sponsored by DARPA and ONR under agreement number N00014-11C-0390. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, or the U.S. Government. Distribution Statement “A” (Approved for Public Release, Distribution Unlimited).", "title": "" }, { "docid": "5b0eef5eed1645ae3d88bed9b20901b9", "text": "We present a radically new approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry’s bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2 security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with Õ(λ · L) per-gate computation – i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is Õ(λ), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results for LWE, but with worse performance. We introduce a number of further optimizations to our schemes. 
As an example, for circuits of large width – e.g., where a constant fraction of levels have width at least λ – we can reduce the per-gate computation of the bootstrapped version to Õ(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω̃(λ) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011). ∗Sponsored by the Air Force Research Laboratory (AFRL). Disclaimer: This material is based on research sponsored by DARPA under agreement number FA8750-11-C-0096 and FA8750-11-2-0225. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Approved for Public Release, Distribution Unlimited. †This material is based on research sponsored by DARPA under Agreement number FA8750-11-2-0225. All disclaimers as above apply.", "title": "" } ]
[ { "docid": "a1494d0c89a4eca3ef4d38d577f5621a", "text": "Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. In addition, the usefulness of the obtained skeletons and scales (thickness) are verified on two object detection applications: foreground object segmentation and object proposal detection.", "title": "" }, { "docid": "4b988535edefeb3ff7df89bcb900dd1c", "text": "Context: As a result of automated software testing, large amounts of software test code (script) are usually developed by software teams. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, just like any software development activity, development of test scripts is tedious and error prone. We refer, in this study, to all activities that should be conducted during the entire lifecycle of test-code as Software Test-Code Engineering (STCE). Objective: As the STCE research area has matured and the number of related studies has increased, it is important to systematically categorize the current state-of-the-art and to provide an overview of the trends in this field. Such summarized and categorized results provide many benefits to the broader community. For example, they are valuable resources for new researchers (e.g., PhD students) aiming to conduct additional secondary studies. Method: In this work, we systematically classify the body of knowledge related to STCE through a systematic mapping (SM) study. As part of this study, we pose a set of research questions, define selection and exclusion criteria, and systematically develop and refine a systematic map. Results: Our study pool includes a set of 60 studies published in the area of STCE between 1999 and 2012. Our mapping data is available through an online publicly-accessible repository. We derive the trends for various aspects of STCE. Among our results are the following: (1) There is an acceptable mix of papers with respect to different contribution facets in the field of STCE and the top two leading facets are tool (68%) and method (65%). 
The studies that presented new processes, however, had a low rate (3%), which denotes the need for more process-related studies in this area. (2) Results of investigation about research facet of studies and comparing our result to other SM studies shows that, similar to other fields in software engineering, STCE is moving towards more rigorous validation approaches. (3) A good mixture of STCE activities has been presented in the primary studies. Among them, the two leading activities are quality assessment and co-maintenance of test-code with production code. The highest growth rate for co-maintenance activities in recent years shows the importance and challenges involved in this activity. (4) There are two main categories of quality assessment activity: detection of test smells and oracle assertion adequacy. (5) JUnit is the leading test framework which has been used in about 50% of the studies. (6) There is a good mixture of SUT types used in the studies: academic experimental systems (or simple code examples), real open-source and commercial systems. (7) Among 41 tools that are proposed for STCE, less than half of the tools (45%) were available for download. It is good to have this percentile of tools to be available, although not perfect, since the availability of tools can lead to higher impact on research community and industry. Conclusion: We discuss the emerging trends in STCE, and discuss the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers to obtain an overview of existing STCE approaches and spot areas in the field that require more attention from the", "title": "" }, { "docid": "00b851715df7fe4878f74796df9d8061", "text": "Low duty-cycle mobile systems can benefit from ultra-low power deep neural network (DNN) accelerators. Analog in-memory computational units are used to store synaptic weights in on-chip non-volatile arrays and perform current-based calculations. In-memory computation entirely eliminates off-chip weight accesses, parallelizes operation, and amortizes readout power costs by reusing currents. The proposed system achieves 900nW measured power, with an estimated energy efficiency of 0.012pJ/MAC in a 130nm SONOS process.", "title": "" }, { "docid": "f5fd1d6f15c9ef06c343378a6f7038a0", "text": "Wayfinding is part of everyday life. This study concentrates on the development of a conceptual model of human navigation in the U.S. Interstate Highway Network. It proposes three different levels of conceptual understanding that constitute the cognitive map: the Planning Level, the Instructional Level, and the Driver Level. This paper formally defines these three levels and examines the conceptual objects that comprise them. The problem treated here is a simpler version of the open problem of planning and navigating a multi-mode trip. We expect the methods and preliminary results found here for the Interstate system to apply to other systems such as river transportation networks and railroad networks.", "title": "" }, { "docid": "36a0b3223b83927f4dfe358086f2a660", "text": "We train a set of state of the art neural networks, the Maxout networks (Goodfellow et al., 2013a), on three benchmark datasets: the MNIST, CIFAR10 and SVHN, with three distinct storing formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the storage on the final error of the training. 
We find that very low precision storage is sufficient not just for running trained networks but also for training them. For example, Maxout networks state-of-the-art results are nearly maintained with 10 bits for storing activations and gradients, and 12 bits for storing parameters.", "title": "" }, { "docid": "a89c471c0ad38741eaf48a83970da456", "text": "Phenotypic and functional heterogeneity arise among cancer cells within the same tumour as a consequence of genetic change, environmental differences and reversible changes in cell properties. Some cancers also contain a hierarchy in which tumorigenic cancer stem cells differentiate into non-tumorigenic progeny. However, it remains unclear what fraction of cancers follow the stem-cell model and what clinical behaviours the model explains. Studies using lineage tracing and deep sequencing could have implications for the cancer stem-cell model and may help to determine the extent to which it accounts for therapy resistance and disease progression.", "title": "" }, { "docid": "7d01463ce6dd7e7e08ebaf64f6916b1d", "text": "An effective location algorithm, which considers nonline-of-sight (NLOS) propagation, is presented. By using a new variable to replace the square term, the problem becomes a mathematical programming problem, and then the NLOS propagation’s effect can be evaluated. Compared with other methods, the proposed algorithm has high accuracy.", "title": "" }, { "docid": "7d0badaeeb94658690f0809c134d3963", "text": "Vascular tissue engineering is an area of regenerative medicine that attempts to create functional replacement tissue for defective segments of the vascular network. One approach to vascular tissue engineering utilizes seeding of biodegradable tubular scaffolds with stem (and/or progenitor) cells wherein the seeded cells initiate scaffold remodeling and prevent thrombosis through paracrine signaling to endogenous cells. Stem cells have received an abundance of attention in recent literature regarding the mechanism of their paracrine therapeutic effect. However, very little of this mechanistic research has been performed under the aegis of vascular tissue engineering. Therefore, the scope of this review includes the current state of TEVGs generated using the incorporation of stem cells in biodegradable scaffolds and potential cell-free directions for TEVGs based on stem cell secreted products. The current generation of stem cell-seeded vascular scaffolds are based on the premise that cells should be obtained from an autologous source. However, the reduced regenerative capacity of stem cells from certain patient groups limits the therapeutic potential of an autologous approach. This limitation prompts the need to investigate allogeneic stem cells or stem cell secreted products as therapeutic bases for TEVGs. The role of stem cell derived products, particularly extracellular vesicles (EVs), in vascular tissue engineering is exciting due to their potential use as a cell-free therapeutic base. EVs offer many benefits as a therapeutic base for functionalizing vascular scaffolds such as cell specific targeting, physiological delivery of cargo to target cells, reduced immunogenicity, and stability under physiological conditions. 
However, a number of points must be addressed prior to the effective translation of TEVG technologies that incorporate stem cell derived EVs such as standardizing stem cell culture conditions, EV isolation, scaffold functionalization with EVs, and establishing the therapeutic benefit of this combination treatment.", "title": "" }, { "docid": "d3883fe900e7b541b17990fb8533832f", "text": "\"Environmental impact assessment\" denotes the attempt to predict and assess the impact of development projects on the environment. A component dealing specifically with human health is often called an \"environmental health impact assessment.\" It is widely held that such impact assessment offers unique opportunities for the protection and promotion of human health. The following components were identified as key elements of an integrated environmental health impact assessment model: project analysis, analysis of status quo (including regional analysis, population analysis, and background situation), prediction of impact (including prognosis of future pollution and prognosis of health impact), assessment of impact, recommendations, communication of results, and evaluation of the overall procedure. The concept was applied to a project of extending a waste disposal facility and to a city bypass highway project. Currently, the coverage of human health aspects in environmental impact assessment still tends to be incomplete, and public health departments often do not participate. Environmental health impact assessment as a tool for health protection and promotion is underutilized. It would be useful to achieve consensus on a comprehensive generic concept. An international initiative to improve the situation seems worth some consideration.", "title": "" }, { "docid": "4de1ea43b95330901620bd2f69865029", "text": "Recent trends towards increasing complexity in distributed embedded real-time systems pose challenges in designing and implementing a reliable system such as a self-driving car. The conventional way of improving reliability is to use redundant hardware to replicate the whole (sub)system. Although hardware replication has been widely deployed in hard real-time systems such as avionics, space shuttles and nuclear power plants, it is significantly less attractive to many applications because the amount of necessary hardware multiplies as the size of the system increases. The growing needs of flexible system design are also not consistent with hardware replication techniques. To address the needs of dependability through redundancy operating in real-time, we propose a layer called SAFER(System-level Architecture for Failure Evasion in Real-time applications) to incorporate configurable task-level fault-tolerance features to tolerate fail-stop processor and task failures for distributed embedded real-time systems. To detect such failures, SAFER monitors the health status and state information of each task and broadcasts the information. When a failure is detected using either time-based failure detection or event-based failure detection, SAFER reconfigures the system to retain the functionality of the whole system. We provide a formal analysis of the worst-case timing behaviors of SAFER features. We also describe the modeling of a system equipped with SAFER to analyze timing characteristics through a model-based design tool called SysWeaver. SAFER has been implemented on Ubuntu 10.04 LTS and deployed on Boss, an award-winning autonomous vehicle developed at Carnegie Mellon University. 
We show various measurements using simulation scenarios used during the 2007 DARPA Urban Challenge. Finally, we present a case study of failure recovery by SAFER when node failures are injected.", "title": "" }, { "docid": "611eacd767f1ea709c1c4aca7acdfcdb", "text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.", "title": "" }, { "docid": "e95257d099750281c83d98af2e194b34", "text": "This paper presents a real-coded memetic algorithm that applies a crossover hill-climbing to solutions produced by the genetic operators. On the one hand, the memetic algorithm provides global search (reliability) by means of the promotion of high levels of population diversity. On the other, the crossover hill-climbing exploits the self-adaptive capacity of real-parameter crossover operators with the aim of producing an effective local tuning on the solutions (accuracy). An important aspect of the memetic algorithm proposed is that it adaptively assigns different local search probabilities to individuals. It was observed that the algorithm adjusts the global/local search balance according to the particularities of each problem instance. Experimental results show that, for a wide range of problems, the method we propose here consistently outperforms other real-coded memetic algorithms which appeared in the literature.", "title": "" }, { "docid": "65bc99201599ec17347d3fe0857cd39a", "text": "Many children strive to attain excellence in sport. However, although talent identification and development programmes have gained popularity in recent decades, there remains a lack of consensus in relation to how talent should be defined or identified and there is no uniformly accepted theoretical framework to guide current practice. The success rates of talent identification and development programmes have rarely been assessed and the validity of the models applied remains highly debated. This article provides an overview of current knowledge in this area with special focus on problems associated with the identification of gifted adolescents. There is a growing agreement that traditional cross-sectional talent identification models are likely to exclude many, especially late maturing, 'promising' children from development programmes due to the dynamic and multidimensional nature of sport talent. A conceptual framework that acknowledges both genetic and environmental influences and considers the dynamic and multidimensional nature of sport talent is presented. The relevance of this model is highlighted and recommendations for future work provided. It is advocated that talent identification and development programmes should be dynamic and interconnected taking into consideration maturity status and the potential to develop rather than to exclude children at an early age. 
Finally, more representative real-world tasks should be developed and employed in a multidimensional design to increase the efficacy of talent identification and development programmes.", "title": "" }, { "docid": "768a8cfff3f127a61f12139466911a94", "text": "The metabolism of NAD has emerged as a key regulator of cellular and organismal homeostasis. Being a major component of both bioenergetic and signaling pathways, the molecule is ideally suited to regulate metabolism and major cellular events. In humans, NAD is synthesized from vitamin B3 precursors, most prominently from nicotinamide, which is the degradation product of all NAD-dependent signaling reactions. The scope of NAD-mediated regulatory processes is wide including enzyme regulation, control of gene expression and health span, DNA repair, cell cycle regulation and calcium signaling. In these processes, nicotinamide is cleaved from NAD(+) and the remaining ADP-ribosyl moiety used to modify proteins (deacetylation by sirtuins or ADP-ribosylation) or to generate calcium-mobilizing agents such as cyclic ADP-ribose. This review will also emphasize the role of the intermediates in the NAD metabolome, their intra- and extra-cellular conversions and potential contributions to subcellular compartmentalization of NAD pools.", "title": "" }, { "docid": "36db2c06d65576e03e00017a9060fd24", "text": "Real-world relations among entities can o‰en be observed and determined by different perspectives/views. For example, the decision made by a user on whether to adopt an item relies on multiple aspects such as the contextual information of the decision, the item’s aŠributes, the user’s profile and the reviews given by other users. Different views may exhibit multi-way interactions among entities and provide complementary information. In this paper, we introduce a multi-tensor-based approach that can preserve the underlying structure of multi-view data in a generic predictive model. Specifically, we propose structural factorization machines (SFMs) that learn the common latent spaces shared by multi-view tensors and automatically adjust the importance of each view in the predictive model. Furthermore, the complexity of SFMs is linear in the number of parameters, which make SFMs suitable to large-scale problems. Extensive experiments on real-world datasets demonstrate that the proposed SFMs outperform several state-of-the-art methods in terms of prediction accuracy and computational cost. CCS CONCEPTS •Computingmethodologies→Machine learning; Supervised learning; Factorization methods;", "title": "" }, { "docid": "c9ad1daa4ee0d900c1a2aa9838eb9918", "text": "A central question in human development is how young children gain knowledge so fast. We propose that analogical generalization drives much of this early learning and allows children to generate new abstractions from experience. In this paper, we review evidence for analogical generalization in both children and adults. We discuss how analogical processes interact with the child's changing knowledge base to predict the course of learning, from conservative to domain-general understanding. This line of research leads to challenges to existing assumptions about learning. 
It shows that (a) it is not enough to consider the distribution of examples given to learners; one must consider the processes learners are applying; (b) contrary to the general assumption, maximizing variability is not always the best route for maximizing generalization and transfer.", "title": "" }, { "docid": "7605c3ae299d7e23c383eea352da81da", "text": "Demands for very high system capacity and end-user data rates of the order of 10 Gb/s can be met in localized environments by Ultra-Dense Networks (UDN), characterized as networks with very short inter-site distances capable of ensuring low interference levels during communications. UDNs are expected to operate in the millimeter-wave band, where wide bandwidth signals needed for such high data rates can be designed, and will rely on high-gain beamforming to mitigate path loss and ensure low interference. The dense deployment of infrastructure nodes will make traditional wire-based backhaul provisioning challenging. Wireless self-backhauling over multiple hops is proposed to enhance flexibility in deployment. A description of the architecture and a concept based on separation of mobility, radio resource coordination among multiple nodes, and data plane handling, as well as on integration with wide-area networks, is introduced. A simulation of a multi-node office environment is used to demonstrate the performance of wireless self-backhauling at various loads.", "title": "" }, { "docid": "8123ab525ce663e44b104db2cacd59a9", "text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing a context in which the need for generation-based methods is especially great.", "title": "" }, { "docid": "2b53b125dc8c79322aabb083a9c991e4", "text": "Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author’s location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain “location indicative words”. We then evaluate the impact of nongeotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability. We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. 
Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems.", "title": "" }, { "docid": "7551b0023dd92888ac229ffda4dfd29e", "text": "Ever since the inception of mobile telephony, the downlink and uplink of cellular networks have been coupled, that is, mobile terminals have been constrained to associate with the same base station in both the downlink and uplink directions. New trends in network densification and mobile data usage increase the drawbacks of this constraint, and suggest that it should be revisited. In this article we identify and explain five key arguments in favor of downlink/uplink decoupling based on a blend of theoretical, experimental, and architectural insights. We then overview the changes needed in current LTE-A mobile systems to enable this decoupling, and then look ahead to fifth generation cellular standards. We demonstrate that decoupling can lead to significant gains in network throughput, outage, and power consumption at a much lower cost compared to other solutions that provide comparable or lower gains.", "title": "" } ]
scidocsrr
926006dcfa25620bb315783faa4ddf36
Basic level scene understanding: categories, attributes and structures
[ { "docid": "8674128201d80772040446f1ab6a7cd1", "text": "In this paper, we present an attribute graph grammar for image parsing on scenes with man-made objects, such as buildings, hallways, kitchens, and living rooms. We choose one class of primitives - 3D planar rectangles projected on images and six graph grammar production rules. Each production rule not only expands a node into its components, but also includes a number of equations that constrain the attributes of a parent node and those of its children. Thus our graph grammar is context sensitive. The grammar rules are used recursively to produce a large number of objects and patterns in images and thus the whole graph grammar is a type of generative model. The inference algorithm integrates bottom-up rectangle detection which activates top-down prediction using the grammar rules. The final results are validated in a Bayesian framework. The output of the inference is a hierarchical parsing graph with objects, surfaces, rectangles, and their spatial relations. In the inference, the acceptance of a grammar rule means recognition of an object, and actions are taken to pass the attributes between a node and its parent through the constraint equations associated with this production rule. When an attribute is passed from a child node to a parent node, it is called bottom-up, and the opposite is called top-down", "title": "" }, { "docid": "225204d66c371372debb3bb2a37c795b", "text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.", "title": "" } ]
[ { "docid": "21af4ea62f07966097c8ab46f7226907", "text": "With the introduction of Microsoft Kinect, there has been considerable interest in creating various attractive and feasible applications in related research fields. Kinect simultaneously captures the depth and color information and provides real-time reliable 3D full-body human-pose reconstruction that essentially turns the human body into a controller. This article presents a finger-writing system that recognizes characters written in the air without the need for an extra handheld device. This application adaptively merges depth, skin, and background models for the hand segmentation to overcome the limitations of the individual models, such as hand-face overlapping problems and the depth-color nonsynchronization. The writing fingertip is detected by a new real-time dual-mode switching method. The recognition accuracy rate is greater than 90 percent for the first five candidates of Chinese characters, English characters, and numbers.", "title": "" }, { "docid": "57fa4164381d9d9691b9ba5c506addbd", "text": "The aim of this study was to evaluate the acute effects of unilateral ankle plantar flexors static-stretching (SS) on the passive range of movement (ROM) of the stretched limb, surface electromyography (sEMG) and single-leg bounce drop jump (SBDJ) performance measures of the ipsilateral stretched and contralateral non-stretched lower limbs. Seventeen young men (24 ± 5 years) performed SBDJ before and after (stretched limb: immediately post-stretch, 10 and 20 minutes and non-stretched limb: immediately post-stretch) unilateral ankle plantar flexor SS (6 sets of 45s/15s, 70-90% point of discomfort). SBDJ performance measures included jump height, impulse, time to reach peak force, contact time as well as the sEMG integral (IEMG) and pre-activation (IEMGpre-activation) of the gastrocnemius lateralis. Ankle dorsiflexion passive ROM increased in the stretched limb after the SS (pre-test: 21 ± 4° and post-test: 26.5 ± 5°, p < 0.001). Post-stretching decreases were observed with peak force (p = 0.029), IEMG (P<0.001), and IEMGpre-activation (p = 0.015) in the stretched limb; as well as impulse (p = 0.03), and jump height (p = 0.032) in the non-stretched limb. In conclusion, SS effectively increased passive ankle ROM of the stretched limb, and transiently (less than 10 minutes) decreased muscle peak force and pre-activation. The decrease of jump height and impulse for the non-stretched limb suggests a SS-induced central nervous system inhibitory effect. Key pointsWhen considering whether or not to SS prior to athletic activities, one must consider the potential positive effects of increased ankle dorsiflexion motion with the potential deleterious effects of power and muscle activity during a simple jumping task or as part of the rehabilitation process.Since decreased jump performance measures can persist for 10 minutes in the stretched leg, the timing of SS prior to performance must be taken into consideration.Athletes, fitness enthusiasts and therapists should also keep in mind that SS one limb has generalized effects upon contralateral limbs as well.", "title": "" }, { "docid": "9d5d667c6d621bd90a688c993065f5df", "text": "Creative individuals increasingly rely on online crowdfunding platforms to crowdsource funding for new ventures. For novice crowdfunding project creators, however, there are few resources to turn to for assistance in the planning of crowdfunding projects. 
We are building a tool for novice project creators to get feedback on their project designs. One component of this tool is a comparison to existing projects. As such, we have applied a variety of machine learning classifiers to learn the concept of a successful online crowdfunding project at the time of project launch. Currently our classifier can predict with roughly 68% accuracy, whether a project will be successful or not. The classification results will eventually power a prediction segment of the proposed feedback tool. Future work involves turning the results of the machine learning algorithms into human-readable content and integrating this content into the feedback tool.", "title": "" }, { "docid": "56316a77e260d8122c4812d684f4d223", "text": "Manipulation fundamentally requires a manipulator to be mechanically coupled to the object being manipulated. A consideration of the physical constraints imposed by dynamic interaction shows that control of a vector quantity such as position or force is inadequate and that control of the manipulator impedance is also necessary. Techniques for control of manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.", "title": "" }, { "docid": "a1758acf5b65d054dd8a354cedc8e412", "text": "Given a health-related question (such as \"I have a bad stomach ache. What should I do?\"), a medical self-diagnosis Android inquires further information from the user, diagnoses the disease, and ultimately recommend best solutions. One practical challenge to build such an Android is to ask correct questions and obtain most relevant information, in order to correctly pinpoint the most likely causes of health conditions. In this paper, we tackle this challenge, named \"relevant symptom question generation\": Given a limited set of patient described symptoms in the initial question (e.g., \"stomach ache\"), what are the most critical symptoms to further ask the patient, in order to correctly diagnose their potential problems? We propose an augmented long short-term memory (LSTM) framework, where the network architecture can naturally incorporate the inputs from embedding vectors of patient described symptoms and an initial disease hypothesis given by a predictive model. Then the proposed framework generates the most important symptom questions. The generation process essentially models the conditional probability to observe a new and undisclosed symptom, given a set of symptoms from a patient as well as an initial disease hypothesis. Experimental results show that the proposed model obtains improvements over alternative methods by over 30% (both precision and mean ordinal distance).", "title": "" }, { "docid": "d6f15e49f3ecdbe3e2949520c3e0c643", "text": "In this paper we explore the connection between clustering categorical data and entropy: clusters of similar points have lower entropy than those of dissimilar ones. We use this connection to design an incremental heuristic algorithm, COOLCAT, which is capable of efficiently clustering large data sets of records with categorical attributes, and data streams. In contrast with other categorical clustering algorithms published in the past, COOLCAT's clustering results are very stable for different sample sizes and parameter settings. Also, the criteria for clustering is a very intuitive one, since it is deeply rooted on the well-known notion of entropy. 
Most importantly, COOLCAT is well equipped to deal with clustering of data streams(continuously arriving streams of data point) since it is an incremental algorithm capable of clustering new points without having to look at every point that has been clustered so far. We demonstrate the efficiency and scalability of COOLCAT by a series of experiments on real and synthetic data sets.", "title": "" }, { "docid": "b222ca1f92bdc9eeb068a76f39c2bcbe", "text": "INTRODUCTION\nRespiratory tract infections are common, and these infections occur frequently in children, susceptible adults, and older persons. The risk for recurrences and complications relates not only to the presence of viruses but also to immune function. Therefore, modulation of the immune system and antiviral interventions such as echinacea might reduce the risk of recurrences and possibly the development of complications.\n\n\nMETHODS\nMEDLINE, EMBASE, CAplus, BIOSIS, CABA, AGRICOLA, TOXCENTER, SCISEARCH, NAHL, and NAPRALERT were searched for clinical trials that studied recurrent respiratory infections and complications on treatment with echinacea extracts in a generally healthy population. Two independent reviewers selected randomized, placebo-controlled studies of high methodological quality and a Jadad score of ≥4. Relative risks (RRs) with 95% confidence intervals (CIs) were calculated according to a fixed effect model.\n\n\nRESULTS\nSix clinical studies with a total of 2458 participants were included in the meta-analysis. Use of echinacea extracts was associated with reduced risk of recurrent respiratory infections (RR 0.649, 95% CI 0.545-0.774; P < 0.0001). Ethanolic extracts from echinacea appeared to provide superior effects over pressed juices, and increased dosing during acute episodes further enhanced these effects. Three independent studies found that in individuals with higher susceptibility, stress or a state of immunological weakness, echinacea halved the risk of recurrent respiratory infections (RR 0.501, 95% CI 0.380-0.661; P < 0.0001). Similar preventive effects were observed with virologically confirmed recurrent infections (RR 0.420, 95% CI 0.222-0.796; P = 0.005). Complications including pneumonia, otitis media/externa, and tonsillitis/pharyngitis were also less frequent with echinacea treatment (RR 0.503, 95% CI 0.384-0.658; P < 0.0001).\n\n\nCONCLUSION\nEvidence indicates that echinacea potently lowers the risk of recurrent respiratory infections and complications thereof. Immune modulatory, antiviral, and anti-inflammatory effects might contribute to the observed clinical benefits, which appear strongest in susceptible individuals.", "title": "" }, { "docid": "74a9612c1ca90a9d7b6152d19af53d29", "text": "Collective entity disambiguation, or collective entity linking aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, this paper shows that the semantic relationships between mentioned entities within a document are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity, and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the problem of entity disambiguation. 
The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. Based on this new objective, we design Pair-Linking, a novel iterative solution for the MINTREE optimization problem. The idea of Pair-Linking is simple: instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments on 8 benchmark datasets, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms.", "title": "" }, { "docid": "ddfc7c8b86ceb96935f0567e7cfb79f8", "text": "This Short Review critically evaluates three hypotheses about the effects of emotion on memory: First, emotion usually enhances memory. Second, when emotion does not enhance memory, this can be understood by the magnitude of physiological arousal elicited, with arousal benefiting memory to a point but then having a detrimental influence. Third, when emotion facilitates the processing of information, this also facilitates the retention of that same information. For each of these hypotheses, we summarize the evidence consistent with it, present counter-evidence suggesting boundary conditions for the effect, and discuss the implications for future research.", "title": "" }, { "docid": "a7202e304c01d07c39b0adf96f4e4930", "text": "Augmented Reality has attracted interest for its p otential as a platform for new compelling usages. This paper provides an overview of technical challenges in imaging and optics encountered in near-eye optical see-through AR disp lay systems. OCIS codes: (000.4930) Other topics of general interest; (000. 2170) Equipment and techniques", "title": "" }, { "docid": "2b53e3494d58b2208f95d5bb67589677", "text": "In his paper ‘Logic and conversation’ Grice (1989: 37) introduced a distinction between generalized and particularized conversational implicatures. His notion of a generalized conversational implicature (GCI) has been developed in two competing directions, by neo-Griceans such as Horn (1989) and Levinson (1983, 1987b, 1995, 2000) on the one hand, and relevance theorists such as Sperber & Wilson (1986) and Carston (1988, 1993, 1995, 1997, 1998a,b) on the other. Levinson defends the claim that GCIs are inferred on the basis of a set of default heuristics that are triggered by the presence of certain sorts of lexical items. These default inferences will be drawn unless something unusual in the context blocks them. Carston reconceives GCIs as contents that a speaker directly communicates, rather than as contents that are merely conversationally implicated. GCIs are treated as pragmatic developments of semantically underspecified logical forms. They are not the products of default inferences, since what is communicated depends heavily on the specific context, and not merely on the presence or absence of certain lexical items. We introduce two processing models, the Default Model and the Underspecified Model, that are inspired by these rival theoretical views. This paper describes an eye monitoring experiment that is intended to test the predictions of these two models. 
Our primary concern is to make a case for the claim that it is fruitful to apply an eye tracking methodology to an area of pragmatic research that has not previously been explored from a processing perspective.", "title": "" }, { "docid": "6a98bdf2e01b340fb1e9a79b233fed80", "text": "Strategic alignment or \"fit\" is a notion that is deemed crucial in understanding how organizations can translate their deployment of information technology (IT) into actual increases in performance. While previous theoretical and methodological works have provided foundations for identifying the dimensions and performance impacts of the strategic alignment between IT, strategy, and structure, few attempts have been made to test the proposed theory empirically and operationalize fit systemically. Based on a gestalt perspective of fit and theory-based ideal coalignment patterns, an operational model of strategic alignment is proposed and empirically validated through a mail survey of 110 small firms. Using cluster analysis, it was found that low-performance firms exhibited a conflictual coalignment pattern of business strategy, business structure, IT strategy, and IT structure that distinguished them from other firms. © 2003 Elsevier B. V. Ali rights reserved.", "title": "" }, { "docid": "d0a99703d292fd60792e6167daab20e9", "text": "We study cooperative navigation for robotic swarms in the context of a general event-servicing scenario. In the scenario, one or more events need to be serviced at specific locations by robots with the required skills. We focus on the question of how the swarm can inform its members about events, and guide robots to event locations. We propose a solution based on delay-tolerant wireless communications: by forwarding navigation information between them, robots cooperatively guide each other towards event locations. Such a collaborative approach leverages on the swarm’s intrinsic redundancy, distribution, and mobility. At the same time, the forwarding of navigation messages is the only form of cooperation that is required. This means that the robots are free in terms of their movement and location, and they can be involved in other tasks, unrelated to the navigation of the searching robot. This gives the system a high level of flexibility in terms of application scenarios, and a high degree of robustness with respect to robot failures or unexpected events. We study the algorithm in two different scenarios, both in simulation and on real robots. In the first scenario, a single searching robot needs to find a single target, while all other robots are involved in tasks of their own. In the second scenario, we study collective navigation: all robots of the swarm navigate back and forth between two targets, which is a typical scenario in swarm robotics. We show that in this case, the proposed algorithm gives rise to synergies in robot navigation, and it lets the swarm self-organize into a robust dynamic structure. The emergence of this structure improves navigation efficiency and lets the swarm find shortest paths.", "title": "" }, { "docid": "84fe6840461b63a5ccf007450f0eeef8", "text": "The canonical Wnt cascade has emerged as a critical regulator of stem cells. In many tissues, activation of Wnt signalling has also been associated with cancer. This has raised the possibility that the tightly regulated self-renewal mediated by Wnt signalling in stem and progenitor cells is subverted in cancer cells to allow malignant proliferation. 
Insights gained from understanding how the Wnt pathway is integrally involved in both stem cell and cancer cell maintenance and growth in the intestinal, epidermal and haematopoietic systems may serve as a paradigm for understanding the dual nature of self-renewal signals.", "title": "" }, { "docid": "36c11c29f6605f7c234e68ecba2a717a", "text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.", "title": "" }, { "docid": "3c81e6ff0e7b2eb509cea08904bdeaf3", "text": "A novel ultra wideband (UWB) bandpass filter with double notch-bands is presented in this paper. Multilayer schematic is adopted to achieve compact size. Stepped impedance resonators (SIRs), which can also suppress harmonic response, are designed on top and second layers, respectively, and broadside coupling technique is used to achieve tight couplings for a wide passband. Folded SIRs that can provide desired notch-bands are designed on the third layer and coupled underneath the second layer SIRs. The designed prototype is fabricated using multilayer liquid crystal polymer (LCP) technology. Good agreement between simulated and measured response is observed. The fabricated filter has dual notch-bands with center frequencies of 6.4/8.0 GHz with 3 dB bandwidths of 9.5%/13.4% and high rejection levels up to 26.4 dB and 43.7 dB at 6.4/8.0 GHz are observed, respectively. It also has low-insertion losses and flat group delay in passbands, and excellent stopband rejection level higher than 30.0 dB from 11.4 GHz to 18.0 GHz.", "title": "" }, { "docid": "d51f0b51f03e310dd183e3a7cb199288", "text": "Traditional vision-based localization methods such as visual SLAM suffer from practical problems in outdoor environments such as unstable feature detection and inability to perform location recognition under lighting, perspective, weather and appearance change. Additionally map construction on a large scale in these systems presents its own challenges. In this work, we present a novel method for precisely localizing vehicles on the road using signs marked on the road (road markings), which have the advantage of being distinct and easy to detect, their detection being robust under changes in lighting and weather. Our method uses corners detected on road markings to perform localization in global coordinates. 
The method consists of two phases - a mapping phase when a high-quality GPS device is used to automatically survey road marks and add them to a light-weight “map” or database, and a localization phase where road mark detection and look-up in the map, combined with visual odometry, produces precise localization. We present experiments using a real-time implementation operating in a car that demonstrates the improved localization robustness and accuracy of our system even when using road marks alone. However, in this case the trajectory between road marks has to be filled-in by visual odometry, which contributes drift. Hence, we also present a mechanism for combining road-mark-based maps with sparse feature-based maps that results in greater accuracy still. We see our use of road marks as a significant step in the general trend of using higher-level features for improved localization performance irrespective of environment conditions.", "title": "" }, { "docid": "7e45fad555bd3b9a2504a1133f1fc9b2", "text": "Research studies in the past decade have shown that computer technology is an effective means for widening educational opportunities, but most teachers neither use technology as an instructional delivery system nor integrate technology into their curriculum. Studies reveal a number of factors influencing teachers’ decisions to use ICT in the classroom: non-manipulative and manipulative school and teacher factors. These factors are interrelated. The success of the implementation of ICT is not dependent on the availability or absence of one individual factor, but is determined through a dynamic process involving a set of interrelated factors. It is suggested that ongoing professional development must be provided for teachers to model the new pedagogies and tools for learning with the aim of enhancing the teaching-learning process. However, it is important for teacher trainers and policy makers to understand the factors affecting effectiveness and cost-effectiveness of different approaches to ICT use in teacher training so training strategies can be appropriately explored to make such changes viable to all.", "title": "" } ]
scidocsrr
5bbed6c30b7cef1945c29e36e8777be3
Intelligent irrigation system — An IOT based approach
[ { "docid": "0ef58b9966c7d3b4e905e8306aad3359", "text": "Agriculture is the back bone of India. To make the sustainable agriculture, this system is proposed. In this system ARM 9 processor is used to control and monitor the irrigation system. Different kinds of sensors are used. This paper presents a fully automated drip irrigation system which is controlled and monitored by using ARM9 processor. PH content and the nitrogen content of the soil are frequently monitored. For the purpose of monitoring and controlling, GSM module is implemented. The system informs user about any abnormal conditions like less moisture content and temperature rise, even concentration of CO2 via SMS through the GSM module.", "title": "" }, { "docid": "a50f168329c1b44ed881e99d66fe7c13", "text": "Indian agriculture is diverse; ranging from impoverished farm villages to developed farms utilizing modern agricultural technologies. Facility agriculture area in China is expanding, and is leading the world. However, its ecosystem control technology and system is still immature, with low level of intelligence. Promoting application of modern information technology in agriculture will solve a series of problems facing by farmers. Lack of exact information and communication leadsto the loss in production. Our paper is designed to over come these problems. This regulator provides an intelligent monitoring platform framework and system structure for facility agriculture ecosystem based on IOT[3]. This will be a catalyst for the transition from traditional farming to modern farming. This also provides opportunity for creating new technology and service development in IOT (internet of things) farming application. The Internet Of Things makes everything connected. Over 50 years since independence, India has made immense progress towards food productivity. The Indian population has tripled, but food grain production more than quadrupled[1]: there has thus been a substantial increase in available food grain per ca-pita. Modern agriculture practices have a great promise for the economic development of a nation. So we have brought-in an innovative project for the welfare of farmers and also for the farms. There are no day or night restrictions. This is helpful at any time.", "title": "" } ]
[ { "docid": "5251605df4db79f6a0fc2779a51938e2", "text": "Drug bioavailability to the developing brain is a major concern in the treatment of neonates and infants as well as pregnant and breast-feeding women. Central adverse drug reactions can have dramatic consequences for brain development, leading to major neurological impairment. Factors setting the cerebral bioavailability of drugs include protein-unbound drug concentration in plasma, local cerebral blood flow, permeability across blood-brain interfaces, binding to neural cells, volume of cerebral fluid compartments, and cerebrospinal fluid secretion rate. Most of these factors change during development, which will affect cerebral drug concentrations. Regarding the impact of blood-brain interfaces, the blood-brain barrier located at the cerebral endothelium and the blood-cerebrospinal fluid barrier located at the choroid plexus epithelium both display a tight phenotype early on in embryos. However, the developmental regulation of some multispecific efflux transporters that also limit the entry of numerous drugs into the brain through barrier cells is expected to favor drug penetration in the neonatal brain. Finally, drug cerebral bioavailability is likely to be affected following perinatal injuries that alter blood-brain interface properties. A thorough investigation of these mechanisms is mandatory for a better risk assessment of drug treatments in pregnant or breast-feeding women, and in neonate and pediatric patients.", "title": "" }, { "docid": "0b5ca91480dfff52de5c1d65c3b32f3d", "text": "Spotting anomalies in large multi-dimensional databases is a crucial task with many applications in finance, health care, security, etc. We introduce COMPREX, a new approach for identifying anomalies using pattern-based compression. Informally, our method finds a collection of dictionaries that describe the norm of a database succinctly, and subsequently flags those points dissimilar to the norm---with high compression cost---as anomalies.\n Our approach exhibits four key features: 1) it is parameter-free; it builds dictionaries directly from data, and requires no user-specified parameters such as distance functions or density and similarity thresholds, 2) it is general; we show it works for a broad range of complex databases, including graph, image and relational databases that may contain both categorical and numerical features, 3) it is scalable; its running time grows linearly with respect to both database size as well as number of dimensions, and 4) it is effective; experiments on a broad range of datasets show large improvements in both compression, as well as precision in anomaly detection, outperforming its state-of-the-art competitors.", "title": "" }, { "docid": "c9b9ac230838ffaff404784b66862013", "text": "On the Mathematical Foundations of Theoretical Statistics. Author(s): R. A. Fisher. Source: Philosophical Transactions of the Royal Society of London. Series A Solutions to Exercises. 325. Bibliography. 347. Index Discrete mathematics is an essential part of the foundations of (theoretical) computer science, statistics . 2) Statistical Methods by S.P.Gupta. 3) Mathematical Statistics by Saxena & Kapoor. 4) Statistics by Sancheti & Kapoor. 5) Introduction to Mathematical Statistics Fundamentals of Mathematical statistics by Guptha, S.C &Kapoor, V.K (Sulthan chand. &sons). 2. 
Introduction to Mathematical statistics by Hogg.R.V and and .", "title": "" }, { "docid": "bf65f2c68808755cfcd13e6cc7d0ccab", "text": "Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.", "title": "" }, { "docid": "3fcb9ab92334e3e214a7db08a93d5acd", "text": "BACKGROUND\nA growing body of literature indicates that physical activity can have beneficial effects on mental health. However, previous research has mainly focussed on clinical populations, and little is known about the psychological effects of physical activity in those without clinically defined disorders.\n\n\nAIMS\nThe present study investigates the association between physical activity and mental health in an undergraduate university population based in the United Kingdom.\n\n\nMETHOD\nOne hundred students completed questionnaires measuring their levels of anxiety and depression using the Hospital Anxiety and Depression Scale (HADS) and their physical activity regime using the Physical Activity Questionnaire (PAQ).\n\n\nRESULTS\nSignificant differences were observed between the low, medium and high exercise groups on the mental health scales, indicating better mental health for those who engage in more exercise.\n\n\nCONCLUSIONS\nEngagement in physical activity can be an important contributory factor in the mental health of undergraduate students.", "title": "" }, { "docid": "64d45fa63ac1ea987cec76bf69c4cc30", "text": "Recently, community psychologists have re-vamped a set of 18 competencies considered important for how we practice community psychology. Three competencies are: (1) ethical, reflexive practice, (2) community inclusion and partnership, and (3) community education, information dissemination, and building public awareness. This paper will outline lessons I-a white working class woman academic-learned about my competency development through my research collaborations, using the lens of affective politics. 
I describe three lessons, from school-based research sites (elementary schools serving working class students of color and one elite liberal arts school serving wealthy white students). The first lesson, from an elementary school, concerns ethical, reflective practice. I discuss understanding my affect as a barometer of my ability to conduct research from a place of solidarity. The second lesson, which centers community inclusion and partnership, illustrates how I learned about the importance of \"before the beginning\" conversations concerning social justice and conflict when working in elementary schools. The third lesson concerns community education, information dissemination, and building public awareness. This lesson, from a college, taught me that I could stand up and speak out against classism in the face of my career trajectory being threatened. With these lessons, I flesh out key aspects of community practice competencies.", "title": "" }, { "docid": "9d700ef057eb090336d761ebe7f6acb0", "text": "This article presents initial results on a supervised machine learning approach to determine the semantics of noun compounds in Dutch and Afrikaans. After a discussion of previous research on the topic, we present our annotation methods used to provide a training set of compounds with the appropriate semantic class. The support vector machine method used for this classification experiment utilizes a distributional lexical semantics representation of the compound’s constituents to make its classification decision. The collection of words that occur in the near context of the constituent are considered an implicit representation of the semantics of this constituent. Fscores were reached of 47.8% for Dutch and 51.1% for Afrikaans. Keywords—compound semantics; Afrikaans; Dutch; machine learning; distributional methods", "title": "" }, { "docid": "504377fd7a3b7c17d702d81d01a71bb6", "text": "We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speakerindependent models, importance of the modalities and generalizability. The paper thus serve as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered while performing such tasks.", "title": "" }, { "docid": "c953895c57d8906736352698a55c24a9", "text": "Data scientists and physicians are starting to use artificial intelligence (AI) even in the medical field in order to better understand the relationships among the huge amount of data coming from the great number of sources today available. Through the data interpretation methods made available by the recent AI tools, researchers and AI companies have focused on the development of models allowing to predict the risk of suffering from a specific disease, to make a diagnosis, and to recommend a treatment that is based on the best and most updated scientific evidence. 
Even if AI is used to perform unimaginable tasks until a few years ago, the awareness about the ongoing revolution has not yet spread through the medical community for several reasons including the lack of evidence about safety, reliability and effectiveness of these tools, the lack of regulation accompanying hospitals in the use of AI by health care providers, the difficult attribution of liability in case of errors and malfunctions of these systems, and the ethical and privacy questions that they raise and that, as of today, are still unanswered.", "title": "" }, { "docid": "44cf5669d05a759ab21b3ebc1f6c340d", "text": "Linear variable differential transformer (LVDT) sensors are widely used in hydraulic and pneumatic mechatronic systems for measuring physical quantities like displacement, force or pressure. The LVDT sensor consists of two magnetic coupled coils with a common core and this sensor converts the displacement of core into reluctance variation of magnetic circuit. LVDT sensors combines good accuracy (0.1 % error) with low cost, but they require relative complex electronics. Standard electronics for LVDT sensor conditioning is analog $the coupled coils constitute an inductive half-bridge supplied with 5 kHz sinus excitation from a quadrate oscillator. The output phase span is amplified and synchronous demodulated. This analog technology works well but has its drawbacks - hard to adjust, many components and packages, no connection to computer systems. To eliminate all these disadvantages, our team from \"Politehnica\" University of Bucharest has developed a LVDT signal conditioner using system on chip microcontroller MSP430F149 from Texas Instruments. This device integrates all peripherals required for LVDT signal conditioning (pulse width modulation modules, analog to digital converter, timers, enough memory resources and processing power) and offers also excellent low-power options. Resulting electronic module is a one-chip solution made entirely in SMD technology and its small dimensions allow its integration into sensor's body. Present paper focuses on specific issues of this digital solution for LVDT conditioning and compares it with classic analog solution from different points of view: error curve, power consumption, communication options, dimensions and production cost. Microcontroller software (firmware) and digital signal conditioning techniques for LVDT are also analyzed. Use of system on chip devices for signal conditioning allows realization of low cost compact transducers with same or better performances than their analog counterparts, but with extra options like serial communication channels, self-calibration, local storage of measured values and fault detection", "title": "" }, { "docid": "8b5ad6c53d58feefe975e481e2352c52", "text": "Virtual machine (VM) live migration is a critical feature for managing virtualized environments, enabling dynamic load balancing, consolidation for power management, preparation for planned maintenance, and other management features. However, not all virtual machine live migration is created equal. Variants include memory migration, which relies on shared backend storage between the source and destination of the migration, and storage migration, which migrates storage state as well as memory state. 
We have developed an automated testing framework that measures important performance characteristics of live migration, including total migration time, the time a VM is unresponsive during migration, and the amount of data transferred over the network during migration. We apply this testing framework and present the results of studying live migration, both memory migration and storage migration, in various virtualization systems including KVM, XenServer, VMware, and Hyper-V. The results provide important data to guide the migration decisions of both system administrators and autonomic cloud management systems.", "title": "" }, { "docid": "8791b422ebeb347294db174168bab439", "text": "Sleep is superior to waking for promoting performance improvements between sessions of visual perceptual and motor learning tasks. Few studies have investigated possible effects of sleep on auditory learning. A key issue is whether sleep specifically promotes learning, or whether restful waking yields similar benefits. According to the \"interference hypothesis,\" sleep facilitates learning because it prevents interference from ongoing sensory input, learning and other cognitive activities that normally occur during waking. We tested this hypothesis by comparing effects of sleep, busy waking (watching a film) and restful waking (lying in the dark) on auditory tone sequence learning. Consistent with recent findings for human language learning, we found that compared with busy waking, sleep between sessions of auditory tone sequence learning enhanced performance improvements. Restful waking provided similar benefits, as predicted based on the interference hypothesis. These findings indicate that physiological, behavioral and environmental conditions that accompany restful waking are sufficient to facilitate learning and may contribute to the facilitation of learning that occurs during sleep.", "title": "" }, { "docid": "a583bbf2deac0bf99e2790c47598cddd", "text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.", "title": "" }, { "docid": "54ef290e7c8fbc5c1bcd459df9bc4a06", "text": "Augmenter of Liver Regeneration (ALR) is a sulfhydryl oxidase carrying out fundamental functions facilitating protein disulfide bond formation. In mammals, it also functions as a hepatotrophic growth factor that specifically stimulates hepatocyte proliferation and promotes liver regeneration after liver damage or partial hepatectomy. Whether ALR also plays a role during vertebrate hepatogenesis is unknown. In this work, we investigated the function of alr in liver organogenesis in zebrafish model. We showed that alr is expressed in liver throughout hepatogenesis. 
Knockdown of alr through morpholino antisense oligonucleotide (MO) leads to suppression of liver outgrowth while overexpression of alr promotes liver growth. The small-liver phenotype in alr morphants results from a reduction of hepatocyte proliferation without affecting apoptosis. When expressed in cultured cells, zebrafish Alr exists as dimer and is localized in mitochondria as well as cytosol but not in nucleus or secreted outside of the cell. Similar to mammalian ALR, zebrafish Alr is a flavin-linked sulfhydryl oxidase and mutation of the conserved cysteine in the CxxC motif abolishes its enzymatic activity. Interestingly, overexpression of either wild type Alr or enzyme-inactive Alr(C131S) mutant promoted liver growth and rescued the liver growth defect of alr morphants. Nevertheless, alr(C131S) is less efficacious in both functions. Meantime, high doses of alr MOs lead to widespread developmental defects and early embryonic death in an alr sequence-dependent manner. These results suggest that alr promotes zebrafish liver outgrowth using mechanisms that are dependent as well as independent of its sulfhydryl oxidase activity. This is the first demonstration of a developmental role of alr in vertebrate. It exemplifies that a low-level sulfhydryl oxidase activity of Alr is essential for embryonic development and cellular survival. The dose-dependent and partial suppression of alr expression through MO-mediated knockdown allows the identification of its late developmental role in vertebrate liver organogenesis.", "title": "" }, { "docid": "8fa721c98dac13157bcc891c06561ec7", "text": "Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total child-care is not yet being promoted, there are indications that it is „on the cards‟. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.", "title": "" }, { "docid": "b5f2b13b5266c30ba02ff6d743e4b114", "text": "The increasing scale, technology advances and services of modern networks have dramatically complicated their management such that in the near future it will be almost impossible for human administrators to monitor them. To control this complexity, IBM has introduced a promising approach aiming to create self-managed systems. This approach, called Autonomic Computing, aims to design computing equipment able to self-adapt its configuration and to self-optimize its performance depending on its situation in order to fulfill high-level objectives defined by the human operator. In this paper, we present our autonomic network management architecture (ANEMA) that implements several policy forms to achieve autonomic behaviors in the network equipments. 
In ANEMA, the high-level objectives of the human administrators and the users are captured and expressed in terms of ‘Utility Function’ policies. The ‘Goal’ policies describe the high-level management directives needed to guide the network to achieve the previous utility functions. Finally, the ‘behavioral’ policies describe the behaviors that should be followed by network equipments to react to changes in their context and to achieve the given ‘Goal’ policies. In order to highlight the benefits of ANEMA architecture and the continuum of policies to introduce autonomic management in a multiservice IP network, a testbed has been implemented and several scenarios have been executed. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f9b110890c90d48b6d2f84aa419c1598", "text": "Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise-minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes, and it could eventually provide a framework to study the behavior of humans and animals as they encounter surprising events.", "title": "" }, { "docid": "2871d80088d7cabd0cd5bdd5101e6018", "text": "Owing to superior physical properties such as high electron saturation velocity and high electric breakdown field, GaN-based high electron mobility transistors (HEMTs) are capable of delivering superior performance in microwave amplifiers, high power switches, and high temperature integrated circuits (ICs). Compared to the conventional D-mode HEMTs with negative threshold voltages, enhancement-mode (E-mode) or normally-off HEMTs are desirable in these applications, for reduced circuit design complexity and fail-safe operation. Fluorine plasma treatment has been used to fabricate E-mode HEMTs [1], and is a robust process for the channel threshold voltage modulation. However, there is no standard equipment for this process and various groups have reported a wide range of process parameters [1–4]. In this work, we demonstrate the self-aligned enhancement-mode AlGaN/GaN HEMTs fabricated with a standard fluorine ion implantation. Ion implantation is widely used in semiconductor industry with well-controlled dose and precise implantation profile.", "title": "" }, { "docid": "c62cc1b0a9c1c4cadede943b4cbd8050", "text": "The problem of parsing has been studied extensively for various formal grammars. Given an input string and a grammar, the parsing problem is to check if the input string belongs to the language generated by the grammar. A closely related problem of great importance is one where the input are a string I and a grammar G and the task is to produce a string I ′ that belongs to the language generated by G and the ‘distance’ between I and I ′ is the smallest (from among all the strings in the language). Specifically, if I is in the language generated by G, then the output should be I. Any parser that solves this version of the problem is called an error correcting parser. 
In 1972 Aho and Peterson presented a cubic time error correcting parser for context free grammars. Since then this asymptotic time bound has not been improved under the (standard) assumption that the grammar size is a constant. In this paper we present an error correcting parser for context free grammars that runs in O(T (n)) time, where n is the length of the input string and T (n) is the time needed to compute the tropical product of two n× n matrices. In this paper we also present an n M -approximation algorithm for the language edit distance problem that has a run time of O(Mnω), where O(nω) is the time taken to multiply two n× n matrices. To the best of our knowledge, no approximation algorithms have been proposed for error correcting parsing for general context free grammars.", "title": "" }, { "docid": "d64d589068d68ef19d7ac77ab55c8318", "text": "Cloud computing is a revolutionary paradigm to deliver computing resources, ranging from data storage/processing to software, as a service over the network, with the benefits of efficient resource utilization and improved manageability. The current popular cloud computing models encompass a cluster of expensive and dedicated machines to provide cloud computing services, incurring significant investment in capital outlay and ongoing costs. A more cost effective solution would be to exploit the capabilities of an ad hoc cloud which consists of a cloud of distributed and dynamically untapped local resources. The ad hoc cloud can be further classified into static and mobile clouds: an ad hoc static cloud harnesses the underutilized computing resources of general purpose machines, whereas an ad hoc mobile cloud harnesses the idle computing resources of mobile devices. However, the dynamic and distributed characteristics of ad hoc cloud introduce challenges in system management. In this article, we propose a generic em autonomic mobile cloud (AMCloud) management framework for automatic and efficient service/resource management of ad hoc cloud in both static and mobile modes. We then discuss in detail the possible security and privacy issues in ad hoc cloud computing. A general security architecture is developed to facilitate the study of prevention and defense approaches toward a secure autonomic cloud system. This article is expected to be useful for exploring future research activities to achieve an autonomic and secure ad hoc cloud computing system.", "title": "" } ]
scidocsrr
92217306dcd4a413e3f60d0523ef15f5
The Controversy Surrounding The Man Who Would Be Queen: A Case History of the Politics of Science, Identity, and Sex in the Internet Age
[ { "docid": "34cab0c02d5f5ec5183bd63c01f932c7", "text": "Autogynephilia is defined as a male’s propensity to be sexually aroused by the thought or image of himself as female. Autogynephilia explains the desire for sex reassignment of some male-to-female (MtF) transsexuals. It can be conceptualized as both a paraphilia and a sexual orientation. The concept of autogynephilia provides an alternative to the traditional model of transsexualism that emphasizes gender identity. Autogynephilia helps explain mid-life MtF gender transition, progression from transvestism to transsexualism, the prevalence of other paraphilias among MtF transsexuals, and late development of sexual interest in male partners. Hormone therapy and sex reassignment surgery can be effective treatments in autogynephilic transsexualism. The concept of autogynephilia can help clinicians better understand MtF transsexual clients who recognize a strong sexual component to their gender dysphoria. (Journal of Gay & Lesbian Psychotherapy, 8(1/2), 2004, pp. 69-87.)", "title": "" } ]
[ { "docid": "18f2e2a5e1b4d51a0a05c559a11a023e", "text": "A novel forward coupler using coupled composite right/left-handed (CRLH) transmission lines (TLs) is presented. Forward coupling is enhanced by the CRLH TLs, which have a considerable difference between the effective phase constants in the even and odd modes. A 10-dB forward coupler using the coupled CRLH TLs is simulated and experimentally demonstrated in the S-band. Its coupled-line length is reduced to half that of the conventional right-handed forward coupler with the same coupling.", "title": "" }, { "docid": "63ddab85be58aa2b9576d9b540ac31ed", "text": "BACKGROUND\nThe objective of this study was to translate and to test the reliability and validity of the 12-item General Health Questionnaire (GHQ-12) in Iran.\n\n\nMETHODS\nUsing a standard 'forward-backward' translation procedure, the English language version of the questionnaire was translated into Persian (Iranian language). Then a sample of young people aged 18 to 25 years old completed the questionnaire. In addition, a short questionnaire containing demographic questions and a single measure of global quality of life was administered. To test reliability the internal consistency was assessed by Cronbach's alpha coefficient. Validity was performed using convergent validity. Finally, the factor structure of the questionnaire was extracted by performing principal component analysis using oblique factor solution.\n\n\nRESULTS\nIn all 748 young people entered into the study. The mean age of respondents was 21.1 (SD = 2.1) years. Employing the recommended method of scoring (ranging from 0 to 12), the mean GHQ score was 3.7 (SD = 3.5). Reliability analysis showed satisfactory result (Cronbach's alpha coefficient = 0.87). Convergent validity indicated a significant negative correlation between the GHQ-12 and global quality of life scores as expected (r = -0.56, P < 0.0001). The principal component analysis with oblique rotation solution showed that the GHQ-12 was a measure of psychological morbidity with two-factor structure that jointly accounted for 51% of the variance.\n\n\nCONCLUSION\nThe study findings showed that the Iranian version of the GHQ-12 has a good structural characteristic and is a reliable and valid instrument that can be used for measuring psychological well being in Iran.", "title": "" }, { "docid": "f702a8c28184a6d49cd2f29a1e4e7ea4", "text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. 
Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.", "title": "" }, { "docid": "9122fa8d5332e98a012e1ede2f12b6cc", "text": "Ghana’s banking system has experienced interesting developments in the past two decades. Products such as international funds transfer, consumer/hire purchase loan and travelers’ cheque, personal computer banking, telephone banking, internet banking, branchless banking, SMS banking have been developed (Abor, 2005). Automated teller machines (ATMs) have become common, giving clients the freedom to transact business at their own convenience (Abor, 2005; Hinson, Amidu and Ensah, 2006). The development of these products has brought fierce competition within the banking industry; as a result, the financial sector has to rethink the way business is carried out, because of this competitive edge. Such competitive edge is driven by business and technological factors especially improvement in telecommunication networks and advancement in computer technology in Ghana (Hinson, Amidu and Ensah, 2006). In business today, the power balance has shifted from supply to demand push. Technological factors; especially developments in information technology are as much a cause as an effect of the transformation to new ways of doing business (Beulen, Ribbers & Roos, 2006). As a result of these developments, traditional value chains are being unbundled (Parker, 1999). One may ask whether such contemporary ways are evident in the Ghanaian banking sector as well? Commercial banks in Ghana are no exception to these changing business trends. Consequently, outsourcing of services has now become paramount to banks in Ghana. IT outsourcing is a major part of outsourcing decisions in commercial banks in Ghana. In Abstract: Ghana’s banking sector is currently faced with swift competition due to the increasing number of players in the market. Over the past ten (10) years, the number of commercial banks in the country has doubled. Banks are faced with the challenge of developing innovative products and services, and also expand rapidly. Facilities management support services are critical to this trend of development. Commercial banks need to make a delicate choice between make or buy of these support services (use-in-house expert or outsource). Unarguably, the need for banks to concentrate on their core business of banking and finance and outsource other non-core services to enhance shareholders wealth cannot be over emphasized. Although outsourcing has gained global recognition, the practice is quite new to commercial banks in Ghana. In recent times, commercial banks have outsourced numerous non-core services such as ICT, janitorial services, security, and even part of bank’s human resources. Whereas outsourcing might come with some comparative advantages for the banks, there are still fears of some uncertainties. Focusing on literature on outsourcing and authors own perspective from the banking sector in Ghana, this paper present the key risks likely to come with outsourcing and what future directions ought to be if such risk are to be reduced to its barest minimum. 
The paper presents a theoretical framework for outsourcing, a platform for further research on outsourcing and for improvement of knowledge.", "title": "" }, { "docid": "597c6ba95d7bf037983e82d91f6a1b74", "text": "An effective solution of generating OAM-carrying radio beams with three polarizations is provided. Through the reasonable configuration of phased antenna array using elements with three polarizations, the OAM radio waves with three polarizations for different states can be generated. The vectors of electric fields with different OAM states for linear, as well as left or right circular polarizations are presented and analyzed in detail.", "title": "" }, { "docid": "d00df5e0c5990c05d5a67e311586a68a", "text": "The present research explored the controversial link between global self-esteem and externalizing problems such as aggression, antisocial behavior, and delinquency. In three studies, we found a robust relation between low self-esteem and externalizing problems. This relation held for measures of self-esteem and externalizing problems based on self-report, teachers' ratings, and parents' ratings, and for participants from different nationalities (United States and New Zealand) and age groups (adolescents and college students). Moreover, this relation held both cross-sectionally and longitudinally and after controlling for potential confounding variables such as supportive parenting, parent-child and peer relationships, achievement-test scores, socioeconomic status, and IQ. In addition, the effect of self-esteem on aggression was independent of narcissism, an important finding given recent claims that individuals who are narcissistic, not low in self-esteem, are aggressive. Discussion focuses on clarifying the relations among self-esteem, narcissism, and externalizing problems.", "title": "" }, { "docid": "a1eeb5721d13b78abbeb46eac559f58f", "text": "Immersive video offers the freedom to navigate inside virtualized environment. Instead of streaming the bulky immersive videos entirely, a viewport (also referred to as field of view, FoV) adaptive streaming is preferred. We often stream the high-quality content within current viewport, while reducing the quality of representation elsewhere to save the network bandwidth consumption. Consider that we could refine the quality when focusing on a new FoV, in this paper, we model the perceptual impact of the quality variations (through adapting the quantization stepsize and spatial resolution) with respect to the refinement duration, and yield a product of two closed-form exponential functions that well explain the joint quantization and resolution induced quality impact. Analytical model is crossvalidated using another set of data, where both Pearson and Spearman’s rank correlation coefficients are close to 0.98. Our work is devised to optimize the adaptive FoV streaming of the immersive video under limited network resource. Numerical results show that our proposed model significantly improves the quality of experience of users, with about 9.36% BD-Rate (Bjontegaard Delta Rate) improvement on average as compared to other representative methods, particularly under the limited bandwidth.", "title": "" }, { "docid": "0ec337f7af66ede2a97ade80ce27c131", "text": "The processing time required by a cryptographic primitive implemented in hardware is an important metric for its performance but it has not received much attention in recent publications on lightweight cryptography. 
Nevertheless, there are important applications for cost effective low-latency encryption. As the first step in the field, this paper explores the lowlatency behavior of hardware implementations of a set of block ciphers. The latency of the implementations is investigated as well as the trade-offs with other metrics such as circuit area, time-area product, power, and energy consumption. The obtained results are related back to the properties of the underlying cipher algorithm and, as it turns out, the number of rounds, their complexity, and the similarity of encryption and decryption procedures have a strong impact on the results. We provide a qualitative description and conclude with a set of recommendations for aspiring low-latency block cipher designers.", "title": "" }, { "docid": "94e2bfa218791199a59037f9ea882487", "text": "As a developing discipline, research results in the field of human computer interaction (HCI) tends to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topic in the HCI literature.", "title": "" }, { "docid": "07631274713ad80653552767d2fe461c", "text": "Life cycle assessment (LCA) methodology was used to determine the optimum municipal solid waste (MSW) management strategy for Eskisehir city. Eskisehir is one of the developing cities of Turkey where a total of approximately 750tons/day of waste is generated. An effective MSW management system is needed in this city since the generated MSW is dumped in an unregulated dumping site that has no liner, no biogas capture, etc. Therefore, five different scenarios were developed as alternatives to the current waste management system. Collection and transportation of waste, a material recovery facility (MRF), recycling, composting, incineration and landfilling processes were considered in these scenarios. SimaPro7 libraries were used to obtain background data for the life cycle inventory. One ton of municipal solid waste of Eskisehir was selected as the functional unit. The alternative scenarios were compared through the CML 2000 method and these comparisons were carried out from the abiotic depletion, global warming, human toxicity, acidification, eutrophication and photochemical ozone depletion points of view. According to the comparisons and sensitivity analysis, composting scenario, S3, is the more environmentally preferable alternative. In this study waste management alternatives were investigated only on an environmental point of view. For that reason, it might be supported with other decision-making tools that consider the economic and social effects of solid waste management.", "title": "" }, { "docid": "bfe5c10940d4cccfb071598ed04020ac", "text": "BACKGROUND\nKnowledge about quality of life and sexual health in patients with genital psoriasis is limited.\n\n\nOBJECTIVES\nWe studied quality of life and sexual function in a large group of patients with genital psoriasis by means of validated questionnaires. 
In addition, we evaluated whether sufficient attention is given by healthcare professionals to sexual problems in patients with psoriasis, as perceived by the patients.\n\n\nMETHODS\nA self-administered questionnaire was sent to 1579 members of the Dutch Psoriasis Association. Sociodemographic patient characteristics, medical data and scores of several validated questionnaires regarding quality of life (Dermatology Life Quality Index) and sexual health (Sexual Quality of Life Questionnaire for use in Men, International Index of Erectile Function, Female Sexual Distress Scale and Female Sexual Function Index) were collected and analysed.\n\n\nRESULTS\nThis study (n = 487) shows that psoriasis has a detrimental effect on quality of life and sexual health. Patients with genital lesions reported even significantly worse quality of life than patients without genital lesions (mean ± SD quality of life scores 8·5 ± 6·5 vs. 5·5 ± 4·6, respectively, P < 0·0001). Sexual distress and dysfunction are particularly prominent in women (reported by 37·7% and 48·7% of the female patients, respectively). Sexual distress is especially high when genital skin is affected (mean ± SD sexual distress score in patients with genital lesions 16·1 ± 12·1 vs. 10·1 ± 9·7 in patients without genital lesions, P = 0·001). The attention given to possible sexual problems in the psoriasis population by healthcare professionals is perceived as insufficient by patients.\n\n\nCONCLUSIONS\nIn addition to quality of life, sexual health is diminished in a considerable number of patients with psoriasis and particularly women with genital lesions have on average high levels of sexual distress. We underscore the need for physicians to pay attention to the impact of psoriasis on psychosocial and sexual health when treating patients for this skin disease.", "title": "" }, { "docid": "647ff27223a27396ffc15c24c5ff7ef1", "text": "Mobile phones are increasingly used for security sensitive activities such as online banking or mobile payments. This usually involves some cryptographic operations, and therefore introduces the problem of securely storing the corresponding keys on the phone. In this paper we evaluate the security provided by various options for secure storage of key material on Android, using either Android's service for key storage or the key storage solution in the Bouncy Castle library. The security provided by the key storage service of the Android OS depends on the actual phone, as it may or may not make use of ARM TrustZone features. Therefore we investigate this for different models of phones.\n We find that the hardware-backed version of the Android OS service does offer device binding -- i.e. keys cannot be exported from the device -- though they could be used by any attacker with root access. This last limitation is not surprising, as it is a fundamental limitation of any secure storage service offered from the TrustZone's secure world to the insecure world. Still, some of Android's documentation is a bit misleading here.\n Somewhat to our surprise, we find that in some respects the software-only solution of Bouncy Castle is stronger than the Android OS service using TrustZone's capabilities, in that it can incorporate a user-supplied password to secure access to keys and thus guarantee user consent.", "title": "" }, { "docid": "0e56318633147375a1058a6e6803e768", "text": "150/150). 
Large-scale distributed analyses of over 30,000 MRI scans recently detected common genetic variants associated with the volumes of subcortical brain structures. Scaling up these efforts, still greater computational challenges arise in screening the genome for statistical associations at each voxel in the brain, localizing effects using “image-wide genome-wide” testing (voxelwise GWAS, vGWAS). Here we benefit from distributed computations at multiple sites to meta-analyze genome-wide image-wide data, allowing private genomic data to stay at the site where it was collected. Site-specific tensorbased morphometry (TBM) is performed with a custom template for each site, using a multi channel registration. A single vGWAS testing 10 variants against 2 million voxels can yield hundreds of TB of summary statistics, which would need to be transferred and pooled for meta-analysis. We propose a 2-step method, which reduces data transfer for each site to a subset of SNPs and voxels guaranteed to contain all significant hits.", "title": "" }, { "docid": "34cc70a2acf5680442f0511c50215d25", "text": "Machine Learning has traditionally focused on narrow artificial intelligence solutions for specific problems. Despite this, we observe two trends in the state-of-the-art: One, increasing architectural homogeneity in algorithms and models. Two, algorithms having more general application: New techniques often beat many benchmarks simultaneously. We review the changes responsible for these trends and look to computational neuroscience literature to anticipate future progress.", "title": "" }, { "docid": "12fe6e1217fb269eb2b7f93e76a35134", "text": "In this paper, we propose to extend the recently introduced model-agnostic meta-learning algorithm (MAML, Finn et al., 2017) for lowresource neural machine translation (NMT). We frame low-resource translation as a metalearning problem, and we learn to adapt to low-resource languages based on multilingual high-resource language tasks. We use the universal lexical representation (Gu et al., 2018b) to overcome the input-output mismatch across different languages. We evaluate the proposed meta-learning strategy using eighteen European languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt, Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and five diverse languages (Ro, Lv, Fi, Tr and Ko) as target tasks. We show that the proposed approach significantly outperforms the multilingual, transfer learning based approach (Zoph et al., 2016) and enables us to train a competitive NMT system with only a fraction of training examples. For instance, the proposed approach can achieve as high as 22.04 BLEU on Romanian-English WMT’16 by seeing only 16,000 translated words (⇠ 600 parallel sentences).", "title": "" }, { "docid": "52bf46e7c0449a274c33765586a2e9a1", "text": "A stand-alone direction finding RFID reader is developed for mobile robot applications employing a dual-directional antenna. By adding search and localization capabilities to the current state of RFID technology, robots will be able to acquire and dock to a static target in a real environment without requiring a map or landmarks. Furthermore, we demonstrate RFID-enabled tracking and following of a target moving unpredictably with a mobile robot. The RFID reader keeps the robot aware of the direction of arrival (DOA) of the signal of interest toward which the dual-directional antenna faces the target transponder. 
The simulation results show that the proposed RFID system can track in real time the movement of the target transponder. To verify the effectiveness of the system in a real environment, we perform a variety of experiments in a hallway including target tracking and following with a commercial mobile robot.", "title": "" }, { "docid": "216f97a97d240456d36ec765fd45739e", "text": "This paper explores the growing trend of using mobile technology in university classrooms, exploring the use of tablets in particular, to identify learning benefits faced by students. Students, acting on their efficacy beliefs, make decisions regarding technology’s influence in improving their education. We construct a theoretical model in which internal and external factors affect a student’s self-efficacy which in turn affects the extent of adoption of a device for educational purposes. Through qualitative survey responses of university students who were given an Apple iPad to keep for the duration of a university course we find high levels of self-efficacy leading to positive views of the technology’s learning enhancement capabilities. Student observations on the practicality of the technology, off-topic use and its effects, communication, content, and perceived market advantage of using a tablet are also explored.", "title": "" }, { "docid": "0c6b1a6b8c3b421821b49a31e39943db", "text": "This paper proposes an ignition system for real time detection of driver’s face recognition, finger print authentication as well as alcohol intoxication and subsequently alerting them. The main aim of this proposed system is to reduce the number of accidents due to driver’s drowsiness and alcohol intake to increase the transportation safety as well as protect the vehicle from theft. This proposed system contains 8-megapixels digital USB camera, Raspberry-pi loaded. Face detection is the important part of this project will be done using Open CV. [2] [3].", "title": "" }, { "docid": "5c83df8ba41b37d86f46de7963798b2f", "text": "Experiments show a primary role of extracellular potassium concentrations in neuronal hyperexcitability and in the generation of epileptiform bursting and depolarization blocks without synaptic mechanisms. We adopt a physiologically relevant hippocampal CA1 neuron model in a zero-calcium condition to better understand the function of extracellular potassium in neuronal seizurelike activities. The model neuron is surrounded by interstitial space in which potassium ions are able to accumulate. Potassium currents, Na{+}-K{+} pumps, glial buffering, and ion diffusion are regulatory mechanisms of extracellular potassium. We also consider a reduced model with a fixed potassium concentration. The bifurcation structure and spiking frequency of the two models are studied. We show that, besides hyperexcitability and bursting pattern modulation, the potassium dynamics can induce not only bistability but also tristability of different firing patterns. Our results reveal the emergence of the complex behavior of multistability due to the dynamical [K{+}]{o} modulation on neuronal activities.", "title": "" }, { "docid": "e5b2857bfe745468453ef9dabbf5c527", "text": "We assume that a high-dimensional datum, like an image, is a compositional expression of a set of properties, with a complicated non-linear relationship between the datum and its properties. This paper proposes a factorial mixture prior for capturing latent properties, thereby adding structured compositionality to deep generative models. 
The prior treats a latent vector as belonging to Cartesian product of subspaces, each of which is quantized separately with a Gaussian mixture model. Some mixture components can be set to represent properties as observed random variables whenever labeled properties are present. Through a combination of stochastic variational inference and gradient descent, a method for learning how to infer discrete properties in an unsupervised or semi-supervised way is outlined and empirically evaluated.", "title": "" } ]
scidocsrr
dfaf59d510939e2ca49406707845a628
Building a Book Recommender system using time based content filtering
[ { "docid": "13b887760a87bc1db53b16eb4fba2a01", "text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.", "title": "" } ]
[ { "docid": "c4183c8b08da8d502d84a650d804cac8", "text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>", "title": "" }, { "docid": "48c851b54fb489cea937cdfac3ca8132", "text": "This paper describes a new system, dubbed Continuous Appearance-based Trajectory SLAM (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop closure filtering uses a probabilistic distribution of possible loop closures along the robot’s previous trajectory, which is represented by a linked list of previously visited locations linked by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao-Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and exploring new locations. A modified resampling scheme counters particle deprivation and allows loop closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM to FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) using multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM.", "title": "" }, { "docid": "be989252cdad4886613f53c7831454cb", "text": "Stress and cortisol are known to impair memory retrieval of well-consolidated declarative material. The effects of cortisol on memory retrieval may in particular be due to glucocorticoid (GC) receptors in the hippocampus and prefrontal cortex (PFC). Therefore, effects of stress and cortisol should be observable on both hippocampal-dependent declarative memory retrieval and PFC-dependent working memory (WM). In the present study, it was tested whether psychosocial stress would impair both WM and memory retrieval in 20 young healthy men. In addition, the association between cortisol levels and cognitive performance was assessed. It was found that stress impaired WM at high loads, but not at low loads in a Sternberg paradigm. High cortisol levels at the time of testing were associated with slow WM performance at high loads, and with impaired recall of moderately emotional, but not of highly emotional paragraphs. Furthermore, performance at high WM loads was associated with memory retrieval. 
These data extend previous results of pharmacological studies in finding WM impairments after acute stress at high workloads and cortisol-related retrieval impairments.", "title": "" }, { "docid": "b324860905b6d8c4b4a8429d53f2543d", "text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.", "title": "" }, { "docid": "48e26039d9b2e4ed3cfdbc0d3ba3f1d0", "text": "This paper presents a trajectory generator and an active compliance control scheme, unified in a framework to synthesize dynamic, feasible and compliant trot-walking locomotion cycles for a stiff-by-nature hydraulically actuated quadruped robot. At the outset, a CoP-based trajectory generator that is constructed using an analytical solution is implemented to obtain feasible and dynamically balanced motion references in a systematic manner. Initial conditions are uniquely determined for symmetrical motion patterns, enforcing that trajectories are seamlessly connected both in position, velocity and acceleration levels, regardless of the given support phase. The active compliance controller, used simultaneously, is responsible for sufficient joint position/force regulation. An admittance block is utilized to compute joint displacements that correspond to joint force errors. In addition to position feedback, these joint displacements are inserted to the position control loop as a secondary feedback term. In doing so, active compliance control is achieved, while the position/force trade-off is modulated via the virtual admittance parameters. Various trot-walking experiments are conducted with the proposed framework using HyQ, a ~ 75kg hydraulically actuated quadruped robot. We present results of repetitive, continuous, and dynamically equilibrated trot-walking locomotion cycles, both on level surface and uneven surface walking experiments.", "title": "" }, { "docid": "c51429c718321a76168c4e3ed4303551", "text": "Available online 17 March 2011", "title": "" }, { "docid": "d146e6c29d99b113782580c16de9b013", "text": "Using a dictionary to map independently trained word embeddings to a shared space has shown to be an effective approach to learn bilingual word embeddings. In this work, we propose a multi-step framework of linear transformations that generalizes a substantial body of previous work. The core step of the framework is an orthogonal transformation, and existing methods can be explained in terms of the additional normalization, whitening, re-weighting, de-whitening and dimensionality reduction steps. This allows us to gain new insights into the behavior of existing methods, including the effectiveness of inverse regression, and design a novel variant that obtains the best published results in zero-shot bilingual lexicon extraction. The corresponding software is released as an open source project.", "title": "" }, { "docid": "2ac9b0d68c4147a6a4def86184e292c8", "text": "In this paper we explore the application of travel-speed prediction to query processing in Moving Objects Databases. We propose to revise the motion plans of moving objects using the predicted travel-speeds. This revision occurs before answering queries. We develop three methods of doing this. 
These methods differ in the time when the motion plans are revised, and which of them are revised. We analyze the three methods theoretically and experimentally.", "title": "" }, { "docid": "1f1fd7217ed5bae04f9ac6f8ccc8c23f", "text": "Relating the brain's structural connectivity (SC) to its functional connectivity (FC) is a fundamental goal in neuroscience because it is capable of aiding our understanding of how the relatively fixed SC architecture underlies human cognition and diverse behaviors. With the aid of current noninvasive imaging technologies (e.g., structural MRI, diffusion MRI, and functional MRI) and graph theory methods, researchers have modeled the human brain as a complex network of interacting neuronal elements and characterized the underlying structural and functional connectivity patterns that support diverse cognitive functions. Specifically, research has demonstrated a tight SC-FC coupling, not only in interregional connectivity strength but also in network topologic organizations, such as community, rich-club, and motifs. Moreover, this SC-FC coupling exhibits significant changes in normal development and neuropsychiatric disorders, such as schizophrenia and epilepsy. This review summarizes recent progress regarding the SC-FC relationship of the human brain and emphasizes the important role of large-scale brain networks in the understanding of structural-functional associations. Future research directions related to this topic are also proposed.", "title": "" }, { "docid": "5aa8fb560e7d5c2621054da97c30ffec", "text": "PURPOSE\nThe aim of this meta-analysis was to evaluate different methods for guided bone regeneration using collagen membranes and particulate grafting materials in implant dentistry.\n\n\nMATERIALS AND METHODS\nAn electronic database search and hand search were performed for all relevant articles dealing with guided bone regeneration in implant dentistry published between 1980 and 2014. Only randomized clinical trials and prospective controlled studies were included. The primary outcomes of interest were survival rates, membrane exposure rates, bone gain/defect reduction, and vertical bone loss at follow-up. A meta-analysis was performed to determine the effects of presence of membrane cross-linking, timing of implant placement, membrane fixation, and decortication.\n\n\nRESULTS\nTwenty studies met the inclusion criteria. Implant survival rates were similar between simultaneous and subsequent implant placement. The membrane exposure rate of cross-linked membranes was approximately 30% higher than that of non-cross-linked membranes. The use of anorganic bovine bone mineral led to sufficient newly regenerated bone and high implant survival rates. Membrane fixation was weakly associated with increased vertical bone gain, and decortication led to higher horizontal bone gain (defect depth).\n\n\nCONCLUSION\nGuided bone regeneration with particulate graft materials and resorbable collagen membranes is an effective technique for lateral alveolar ridge augmentation. Because implant survival rates for simultaneous and subsequent implant placement were similar, simultaneous implant placement is recommended when possible. Additional techniques like membrane fixation and decortication may represent beneficial implications for the practice.", "title": "" }, { "docid": "04fd45380cc99b4b650318c0df7627a6", "text": "Research and development of recommender systems has been a vibrant field for over a decade, having produced proven metho ds for “preference-aware” computing. 
Recommenders use community opinion histories to help users identify interesting items from a considerably large search space (e.g., inventory from Amazon [7], movies from Netflix [9]). Personalization, recommendation, and the “human side” of data-centric applications are even becoming important topics in the data management community [3]. A popular recommendation method used heavily in practice is collaborative filtering, consisting of two phases: (1) An offline model-building phase that uses community opinions of items (e.g., movie ratings, “Diggs” [6]) to build a model storing meaningful correlations between users and items. (2) An on-demand recommendation phase that uses the model to produce a set of recommended items when requested from a user or application. To be effective, recommender systems must evolve with their content. In current update-intensive systems (e.g., social networks, online news sites), the restriction that a model be generated offline is a significant drawback, as it hinders the system’s ability to evolve quickly. For instance, new users enter the system changing the collective opinions over items, or the system adds new items quickly (e.g., news posts, Facebook postings), which widens the recommendation pool. These updates affect the recommender model, which in turn affects the system’s recommendation quality in terms of providing accurate answers to recommender queries. In such systems, a completely real-time recommendation process is paramount. Unfortunately, most traditional state-of-the-art recommenders are “hand-built”, implemented as custom software not built for a real-time recommendation process [1]. Further, for some", "title": "" }, { "docid": "0472c8c606024aaf2700dee3ad020c07", "text": "Any discussion on exchange rate movements and forecasting should include explanatory variables from both the current account and the capital account of the balance of payments. In this paper, we include such factors to forecast the value of the Indian rupee vis a vis the US Dollar. Further, factors reflecting political instability and lack of mechanism for enforcement of contracts that can affect both direct foreign investment and also portfolio investment, have been incorporated. The explanatory variables chosen are the 3 month Rupee Dollar futures exchange rate (FX4), NIFTY returns (NIFTYR), Dow Jones Industrial Average returns (DJIAR), Hang Seng returns (HSR), DAX returns (DR), crude oil price (COP), CBOE VIX (CV) and India VIX (IV). To forecast the exchange rate, we have used two different classes of frameworks namely, Artificial Neural Network (ANN) based models and Time Series Econometric models. Multilayer Feed Forward Neural Network (MLFFNN) and Nonlinear Autoregressive models with Exogenous Input (NARX) Neural Network are the approaches that we have used as ANN models. Generalized Autoregressive Conditional Heteroskedastic (GARCH) and Exponential Generalized Autoregressive Conditional Heteroskedastic (EGARCH) techniques are the ones that we have used as Time Series Econometric methods. Within our framework, our results indicate that, although the two different approaches are quite efficient in forecasting the exchange rate, MLFFNN and NARX are the most efficient. 
Journal of Insurance and Financial Management ARTICLE INFO JEL Classification: C22 C45 C63 F31 F47", "title": "" }, { "docid": "2a244146b1cf3433b2e506bdf966e134", "text": "The rate of detection of thyroid nodules and carcinomas has increased with the widespread use of ultrasonography (US), which is the mainstay for the detection and risk stratification of thyroid nodules as well as for providing guidance for their biopsy and nonsurgical treatment. The Korean Society of Thyroid Radiology (KSThR) published their first recommendations for the US-based diagnosis and management of thyroid nodules in 2011. These recommendations have been used as the standard guidelines for the past several years in Korea. Lately, the application of US has been further emphasized for the personalized management of patients with thyroid nodules. The Task Force on Thyroid Nodules of the KSThR has revised the recommendations for the ultrasound diagnosis and imaging-based management of thyroid nodules. The review and recommendations in this report have been based on a comprehensive analysis of the current literature and the consensus of experts.", "title": "" }, { "docid": "ab5cf1d4c03dea07a46587b73235387c", "text": "Image is usually taken for expressing some kinds of emotions or purposes, such as love, celebrating Christmas. There is another better way that combines the image and relevant song to amplify the expression, which has drawn much attention in the social network recently. Hence, the automatic selection of songs should be expected. In this paper, we propose to retrieve semantic relevant songs just by an image query, which is named as the image2song problem. Motivated by the requirements of establishing correlation in semantic/content, we build a semantic-based song retrieval framework, which learns the correlation between image content and lyric words. This model uses a convolutional neural network to generate rich tags from image regions, a recurrent neural network to model lyric, and then establishes correlation via a multi-layer perceptron. To reduce the content gap between image and lyric, we propose to make the lyric modeling focus on the main image content via a tag attention. We collect a dataset from the social-sharing multimodal data to study the proposed problem, which consists of (image, music clip, lyric) triplets. We demonstrate that our proposed model shows noticeable results in the image2song retrieval task and provides suitable songs. Besides, the song2image task is also performed.", "title": "" }, { "docid": "66370e97fba315711708b13e0a1c9600", "text": "Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. 
To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve the privacy-preserving public cloud data auditing system, which meets all above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.", "title": "" }, { "docid": "53d1ddf4809ab735aa61f4059a1a38b1", "text": "In this paper we present a wearable Haptic Feedback Device to convey intuitive motion direction to the user through haptic feedback based on vibrotactile illusions. Vibrotactile illusions occur on the skin when two or more vibrotactile actuators in proximity are actuated in coordinated sequence, causing the user to feel combined sensations, instead of separate ones. By combining these illusions we can produce various sensation patterns that are discernible by the user, thus allowing to convey different information with each pattern. A method to provide information about direction through vibrotactile illusions is introduced on this paper. This method uses a grid of vibrotactile actuators around the arm actuated in coordination. The sensation felt on the skin is consistent with the desired direction of motion, so the desired motion can be intuitively understood. We show that the users can recognize the conveyed direction, and implemented a proof of concept of the proposed method to guide users' elbow flexion/extension motion.", "title": "" }, { "docid": "05062605a55c1cae500fb43af8334c46", "text": "Over the last decade, there has been considerable interest in designing algorithms for processing massive graphs in the data stream model. The original motivation was two-fold: a) in many applications, the dynamic graphs that arise are too large to be stored in the main memory of a single machine and b) considering graph problems yields new insights into the complexity of stream computation. However, the techniques developed in this area are now finding applications in other areas including data structures for dynamic graphs, approximation algorithms, and distributed and parallel computation. We survey the state-of-the-art results; identify general techniques; and highlight some simple algorithms that illustrate basic ideas.", "title": "" }, { "docid": "c2722939dca35be6fd8662c6b77cee1d", "text": "The cost of moving and storing data is still a fundamental concern for computer architects. Inefficient handling of data can be attributed to conventional architectures being oblivious to the nature of the values that these data bits carry. We observe the phenomenon of spatio-value similarity, where data elements that are approximately similar in value exhibit spatial regularity in memory. This is inherent to 1) the data values of real-world applications, and 2) the way we store data structures in memory. 
We propose the Bunker Cache, a design that maps similar data to the same cache storage location based solely on their memory address, sacrificing some application quality loss for greater efficiency. The Bunker Cache enables performance gains (ranging from 1.08x to 1.19x) via reduced cache misses and energy savings (ranging from 1.18x to 1.39x) via reduced off-chip memory accesses and lower cache storage requirements. The Bunker Cache requires only modest changes to cache indexing hardware, integrating easily into commodity systems.", "title": "" }, { "docid": "d6da3d9b1357c16bb2d9ea46e56fa60f", "text": "The Supervisory Control and Data Acquisition System (SCADA) monitor and control real-time systems. SCADA systems are the backbone of the critical infrastructure, and any compromise in their security can have grave consequences. Therefore, there is a need to have a SCADA testbed for checking vulnerabilities and validating security solutions. In this paper we develop such a SCADA testbed.", "title": "" }, { "docid": "647e3aa7df6379ead9929decb58e0c3d", "text": "We present a fast inverse-graphics framework for instance-level 3D scene understanding. We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.", "title": "" } ]
scidocsrr
1418ec82ce97fa32e4b51cf663172f69
Image denoising via adaptive soft-thresholding based on non-local samples
[ { "docid": "c6a44d2313c72e785ae749f667d5453c", "text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + z_i, i = 0, ..., n − 1, t_i = i/n, z_i iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0 by an amount √(2 log(n))/√n. We prove two results about that estimator. [Smooth]: With high probability f̂_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.", "title": "" }, { "docid": "db913c6fe42f29496e13aa05a6489c9b", "text": "As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.", "title": "" }, { "docid": "4d9cf5a29ebb1249772ebb6a393c5a4e", "text": "This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are three-fold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism of combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new form of minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, in order to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the above severely underdetermined inverse problem associated with theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal applications verify the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "b5453d9e4385d5a5ff77997ad7e3f4f0", "text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. 
Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "title": "" } ]
[ { "docid": "fc04f9bd523e3d2ca57ab3a8e730397b", "text": "Interactive, distributed, and embedded systems often behave stochastically, for example, when inputs, message delays, or failures conform to a probability distribution. However, reasoning analytically about the behavior of complex stochastic systems is generally infeasible. While simulations of systems are commonly used in engineering practice, they have not traditionally been used to reason about formal specifications. Statistical model checking (SMC) addresses this weakness by using a simulation-based approach to reason about precise properties specified in a stochastic temporal logic. A specification for a communication system may state that within some time bound, the probability that the number of messages in a queue will be greater than 5 must be less than 0.01. Using SMC, executions of a stochastic system are first sampled, after which statistical techniques are applied to determine whether such a property holds. While the output of sample-based methods are not always correct, statistical inference can quantify the confidence in the result produced. In effect, SMC provides a more widely applicable and scalable alternative to analysis of properties of stochastic systems using numerical and symbolic methods. SMC techniques have been successfully applied to analyze systems with large state spaces in areas such as computer networking, security, and systems biology. In this article, we survey SMC algorithms, techniques, and tools, while emphasizing current limitations and tradeoffs between precision and scalability.", "title": "" }, { "docid": "6b933bbad26efaf65724d0c923330e75", "text": "This paper presents a 138-170 GHz active frequency doubler implemented in a 0.13 μm SiGe BiCMOS technology with a peak output power of 5.6 dBm and peak power-added efficiency of 7.6%. The doubler achieves a peak conversion gain of 4.9 dB and consumes only 36 mW of DC power at peak drive through the use of a push-push frequency doubling stage optimized for low drive power, along with a low-power output buffer. To the best of our knowledge, this doubler achieves the highest output power, efficiency, and fundamental frequency suppression of all D-band and G-band SiGe HBT frequency doublers to date.", "title": "" }, { "docid": "deaa86a5fe696d887140e29d0b2ae22c", "text": "The high prevalence of spinal stenosis results in a large volume of MRI imaging, yet interpretation can be time-consuming with high inter-reader variability even among the most specialized radiologists. In this paper, we develop an efficient methodology to leverage the subject-matter-expertise stored in large-scale archival reporting and image data for a deep-learning approach to fully-automated lumbar spinal stenosis grading. Specifically, we introduce three major contributions: (1) a natural-language-processing scheme to extract level-by-level ground-truth labels from free-text radiology reports for the various types and grades of spinal stenosis (2) accurate vertebral segmentation and disc-level localization using a U-Net architecture combined with a spine-curve fitting method, and (3) a multiinput, multi-task, and multi-class convolutional neural network to perform central canal and foraminal stenosis grading on both axial and sagittal imaging series inputs with the extracted report-derived labels applied to corresponding imaging level segments. This study uses a large dataset of 22796 disc-levels extracted from 4075 patients. 
We achieve state-of-the-art performance on lumbar spinal stenosis classification and expect the technique will increase both radiology workflow efficiency and the perceived value of radiology reports for referring clinicians and patients.", "title": "" }, { "docid": "af0b4e07ec7a60d0021e8bddde5e8b92", "text": "Social Network Sites (SNSs) offer a plethora of privacy controls, but users rarely exploit all of these mechanisms, nor do they do so in the same manner. We demonstrate that SNS users instead adhere to one of a small set of distinct privacy management strategies that are partially related to their level of privacy feature awareness. Using advanced Factor Analysis methods on the self-reported privacy behaviors and feature awareness of 308 Facebook users, we extrapolate six distinct privacy management strategies, including: Privacy Maximizers, Selective Sharers, Privacy Balancers, Self-Censors, Time Savers/Consumers, and Privacy Minimalists and six classes of privacy proficiency based on feature awareness, ranging from Novices to Experts. We then cluster users on these dimensions to form six distinct behavioral profiles of privacy management strategies and six awareness profiles for privacy proficiency. We further analyze these privacy profiles to suggest opportunities for training and education, interface redesign, and new approaches for personalized privacy recommendations.", "title": "" }, { "docid": "0fefdbc0dbe68391ccfc912be937f4fc", "text": "Privacy and security are essential requirements in practical biometric systems. In order to prevent the theft of biometric patterns, it is desired to modify them through revocable and non-invertible transformations called Cancelable Biometrics. In this paper, we propose an efficient algorithm for generating a Cancelable Iris Biometric based on Sectored Random Projections. Our algorithm can generate a new pattern if the existing one is stolen, retain the original recognition performance and prevent extraction of useful information from the transformed patterns. Our method also addresses some of the drawbacks of existing techniques and is robust to degradations due to eyelids and eyelashes.", "title": "" }, { "docid": "5bd9b0de217f2a537a5fadf99931d149", "text": "A linear programming (LP) method for security dispatch and emergency control calculations on large power systems is presented. The method is reliable, fast, flexible, easy to program, and requires little computer storage. It works directly with the normal power-system variables and limits, and incorporates the usual sparse matrix techniques. An important feature of the method is that it handles multi-segment generator cost curves neatly and efficiently.", "title": "" }, { "docid": "968ea2dcfd30492a81a71be25f16e350", "text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees.
Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.", "title": "" }, { "docid": "aac5f1bd2459a19c42bb0c48e99e22f0", "text": "This study examined multiple levels of adolescents' interpersonal functioning, including general peer relations (peer crowd affiliations, peer victimization), and qualities of best friendships and romantic relationships as predictors of symptoms of depression and social anxiety. An ethnically diverse sample of 421 adolescents (57% girls; 14 to 19 years) completed measures of peer crowd affiliation, peer victimization, and qualities of best friendships and romantic relationships. Peer crowd affiliations (high and low status), positive qualities in best friendships, and the presence of a dating relationship protected adolescents against feelings of social anxiety, whereas relational victimization and negative interactions in best friendships predicted high social anxiety. In contrast, affiliation with a high-status peer crowd afforded some protection against depressive affect; however, relational victimization and negative qualities of best friendships and romantic relationships predicted depressive symptoms. Some moderating effects for ethnicity were observed. Findings indicate that multiple aspects of adolescents' social relations uniquely contribute to feelings of internal distress. Implications for research and preventive interventions are discussed.", "title": "" }, { "docid": "0cd46ebc56a6f640931ac4a81676968f", "text": "An improved direct torque controlled induction motor drive is reported in this paper. It is established that the conventional direct torque controlled drive has more torque and flux ripples in steady state, which result in poor torque response, acoustic noise and incorrect speed estimations. Hysteresis controllers also make the switching frequency of voltage source inverter a variable quantity. A strategy of variable duty ratio control scheme is proposed to increase switching frequency, and adjust the width of hysteresis bands according to the switching frequency. This technique minimizes torque and current ripples, improves torque response, and reduces switching losses in spite of its simplicity. Simulation results establish the improved performance of the proposed direct torque control method compared to conventional methods.", "title": "" }, { "docid": "3177e9dd683fdc66cbca3bd985f694b1", "text": "Online communities allow millions of people who would never meet in person to interact. People join web-based discussion boards, email lists, and chat rooms for friendship, social support, entertainment, and information on technical, health, and leisure activities [24]. And they do so in droves. 
One of the earliest networks of online communities, Usenet, had over nine million unique contributors, 250 million messages, and approximately 200,000 active groups in 2003 [27], while the newer MySpace, founded in 2003, attracts a quarter million new members every day [27].", "title": "" }, { "docid": "18216c0745ae3433b3b7f89bb7876a49", "text": "This paper presents research using full body skeletal movements captured using video-based sensor technology developed by Vicon Motion Systems, to train a machine to identify different human emotions. The Vicon system uses a series of 6 cameras to capture lightweight markers placed on various points of the body in 3D space, and digitizes movement into x, y, and z displacement data. Gestural data from five subjects was collected depicting four emotions: sadness, joy, anger, and fear. Experimental results with different machine learning techniques show that automatic classification of this data ranges from 84% to 92% depending on how it is calculated. In order to put these automatic classification results into perspective a user study on the human perception of the same data was conducted with average classification accuracy of 93%.", "title": "" }, { "docid": "695264db0ca1251ab0f63b04d41c68cd", "text": "Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset, particularly on difficult entity examples.", "title": "" }, { "docid": "cbc6986bf415292292b7008ae4d13351", "text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. 
To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.", "title": "" }, { "docid": "827e9045f932b146a8af66224e114be6", "text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.", "title": "" }, { "docid": "89267dbf693643ea53696c7d545254ea", "text": "Cognitive dissonance theory is applicable to very limited areas of consumer behavior according to the author. Published findings in support of the theory are equivocal; they fail to show that cognitive dissonance is the only possible cause of observed \"dissonance-reducing\" behavior. Experimental evidences are examined and their weaknesses pointed out by the author to justify his position. He also provides suggestions regarding the circumstances under which dissonance reduction may be useful in increasing the repurchase probability of a purchased brand.", "title": "" }, { "docid": "c68633905f8bbb759c71388819e9bfa9", "text": "An additional mechanical mechanism for a passive parallelogram-based exoskeleton arm-support is presented. It consists of several levers and joints and an attached extension coil spring. The additional mechanism has two favourable features. On the one hand it exhibits an almost iso-elastic behaviour whereby the lifting force of the mechanism is constant for a wide working range. Secondly, the value of the supporting force can be varied by a simple linear movement of a supporting joint. Furthermore a standard tension spring can be used to gain the desired behavior. The additional mechanism is a 4-link mechanism affixed to one end of the spring within the parallelogram arm-support. It has several geometrical parameters which influence the overall behaviour. A standard optimisation routine with constraints on the parameters is used to find an optimal set of geometrical parameters. Based on the optimized geometrical parameters a prototype was constructed and tested. It is a lightweight wearable system, with a weight of 1.9 kg. Detailed experiments reveal a difference between measured and calculated forces. These variations can be explained by a 60 % higher pre load force of the tension spring and a geometrical offset in the construction.", "title": "" }, { "docid": "70ba0f4938630e07d9b145216a01177a", "text": "For some decades radiation therapy has been proved successful in cancer treatment. It is the major task of clinical radiation treatment planning to realise on the one hand a high level dose of radiation in the cancer tissue in order to obtain maximum tumour control. On the other hand it is obvious that it is absolutely necessary to keep in the tissue outside the tumour, particularly in organs at risk, the unavoidable radiation as low as possible. No doubt, these two objectives of treatment planning – high level dose in the tumour, low radiation outside the tumour – have a basically contradictory nature. Therefore, it is no surprise that inverse mathematical models with dose distribution bounds tend to be infeasible in most cases. Thus, there is need for approximations compromising between overdosing the organs at risk and underdosing the target volume. 
Differing from the currently used time-consuming iterative approach, which measures deviation from an ideal (non-achievable) treatment plan using recursively trial-and-error weights for the organs of interest, we go a new way trying to avoid a priori weight choices and consider the treatment planning problem as a multiple objective linear programming problem: with each organ of interest, target tissue as well as organs at risk, we associate an objective function measuring the maximal deviation from the prescribed doses. We build up a data base of relatively few efficient solutions representing and approximating the variety of Pareto solutions of the multiple objective linear programming problem. This data base can be easily scanned by physicians looking for an adequate treatment plan with the aid of an appropriate online tool. 1 The inverse radiation treatment problem – an introduction. Every year, in Germany about 450,000 individuals are diagnosed with life-threatening forms of cancer. About 60% of these patients are treated with radiation; half of them are considered curable because their tumours are localised and susceptible to radiation. Nevertheless, despite the use of the best radiation therapy methods available, one third of these “curable” patients – nearly 40,000 people each year – die with primary tumours still active at the original site. Why does this occur? Experts in the field have looked at the reasons for these failures and have concluded that radiation therapy planning – in particular in complicated anatomical situations – is often inadequate, providing either too little radiation to the tumour or too much radiation to nearby healthy tissue. Effective radiation therapy planning for treating malignant tumours is always a tightrope walk between ineffective underdose of tumour tissue – the target volume – and dangerous overdose of organs at risk being relevant for maintaining life quality of the cured patient. Therefore, it is the challenging task of a radiation therapy planner to realise a certain high dose level conform to the shape of the target volume in order to have a good prognosis for tumour control and to avoid overdose in relevant healthy tissue nearby. Part of this challenge is the computer-aided representation of the relevant parts of the body. Modern scanning methods like computer tomography (CT) and magnetic resonance tomography.", "title": "" }, { "docid": "f5b02bdd74772ff2454a475e44077c8e", "text": "This paper presents a new method - adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems. Inspired by generative adversarial networks (GAN), we train a discriminator to differentiate responses/actions generated by dialogue agents from responses/actions by experts. Then, we incorporate the discriminator as another critic into the advantage actor-critic (A2C) framework, to encourage the dialogue agent to explore state-action within the regions where the agent takes actions similar to those of the experts.
Experimental results in a movie-ticket booking domain show that the proposed Adversarial A2C can accelerate policy exploration efficiently.", "title": "" }, { "docid": "5a2c04519e5e810daed299140a0c398c", "text": "Satisfying stringent customer requirements of visually detectable solder joint termination for high reliability applications requires the implementation of robust wettable flank strategies. One strategy involves the exposure of the sidewall via partial-cut singulation, where the exposed surface could be made wettable through a tin (Sn) electroplating process. Herein, we report our systematic approach in evaluating the viability of a mechanical partial-cut singulation process to produce Sn-plateable sidewalls, enabling the wettable flank technology using an automotive QFN package as technology carrier. Optimization DOE produced a robust set of parameters showing that mechanical partial cut is a promising solution to produce sidewalls appropriate for Sn electroplating, synergistically yielding excellent wettable flanks.", "title": "" } ]
scidocsrr
531974f421584766e295f4ac0934b198
Predicting User-Topic Opinions in Twitter with Social and Topical Context
[ { "docid": "215ccfeaf75d443e8eb6ead8172c9b92", "text": "Maximum Margin Matrix Factorization (MMMF) was recently suggested (Srebro et al., 2005) as a convex, infinite dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semi-definite programming (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices of dimensionality up to a few hundred. Here, we investigate a direct gradient-based optimization method for MMMF and demonstrate it on large collaborative prediction problems. We compare against results obtained by Marlin (2004) and find that MMMF substantially outperforms all nine methods he tested.", "title": "" } ]
[ { "docid": "6b7ba7008f66526620d53b405fee2b72", "text": "During endoscopic sinus surgery, to perform complex surgical procedures in the delicate nasal cavity, endoscope holding devices are needed. This paper presents a novel robotic system with safe locking and easy to release system that assists surgeons in holding the endoscope. The presented robotic system has eleven degrees of freedom (DOF). The robot consists of an elevator mechanism, a positioning arm and a fine adjustment device. The positioning arm which adopts a negatively actuated air-locking system, consist of three modular rotational joints and two modular ball joints. Since the positioning arm has a structure such that it normally assumes a locked state, and an unlocked state only when necessary, the locked state of the arm can be maintained even if any trouble occurs, causing no falling of the whole robot. The mechanical models of the modular rotational joints and modular ball joints have been derived to guide the detailed design of the prototype of the robot. The workspace of the robot has also been analyzed. Preliminary experiments with manikin have been conducted to demonstrate the performance of the robot.", "title": "" }, { "docid": "c51462988ce97a93da02e00af075127b", "text": "By using mirror reflections of a scene, stereo images can be captured with a single camera (catadioptric stereo). In addition to simplifying data acquisition single camera stereo provides both geometric and radiometric advantages over traditional two camera stereo. In this paper, we discuss the geometry and calibration of catadioptric stereo with two planar mirrors. In particular, we will show that the relative orientation of a catadioptric stereo rig is restricted to the class of planar motions thus reducing the number of external calibration parameters from 6 to 5. Next we derive the epipolar geometry for catadioptric stereo and show that it has 6 degrees of freedom rather than 7 for traditional stereo. Furthermore, we show how focal length can be recovered from a single catadioptric image solely from a set of stereo correspondences. To test the accuracy of the calibration we present a comparison to Tsai camera calibration and we measure the quality of Euclidean reconstruction. In addition, we will describe a real-time system which demonstrates the viability of stereo with mirrors as an alternative to traditional two camera stereo.", "title": "" }, { "docid": "1a58f72cd0f6e979a72dbc233e8c4d4a", "text": "The revolution of genome sequencing is continuing after the successful second-generation sequencing (SGS) technology. The third-generation sequencing (TGS) technology, led by Pacific Biosciences (PacBio), is progressing rapidly, moving from a technology once only capable of providing data for small genome analysis, or for performing targeted screening, to one that promises high quality de novo assembly and structural variation detection for human-sized genomes. In 2014, the MinION, the first commercial sequencer using nanopore technology, was released by Oxford Nanopore Technologies (ONT). MinION identifies DNA bases by measuring the changes in electrical conductivity generated as DNA strands pass through a biological pore. Its portability, affordability, and speed in data production makes it suitable for real-time applications, the release of the long read sequencer MinION has thus generated much excitement and interest in the genomics community. 
While de novo genome assemblies can be cheaply produced from SGS data, assembly continuity is often relatively poor, due to the limited ability of short reads to handle long repeats. Assembly quality can be greatly improved by using TGS long reads, since repetitive regions can be easily expanded into using longer sequencing lengths, despite having higher error rates at the base level. The potential of nanopore sequencing has been demonstrated by various studies in genome surveillance at locations where rapid and reliable sequencing is needed, but where resources are limited.", "title": "" }, { "docid": "540063344df0b56fcc99bf8572e5e4d2", "text": "Groups play an essential role in many social websites which promote users' interactions and accelerate the diffusion of information. Recommending groups that users are really interested to join is significant for both users and social media. While traditional group recommendation problem has been extensively studied, we focus on a new type of the problem, i.e., event-based group recommendation. Unlike the other forms of groups, users join this type of groups mainly for participating offline events organized by group members or inviting other users to attend events sponsored by them. These characteristics determine that previously proposed approaches for group recommendation cannot be adapted to the new problem easily as they ignore the geographical influence and other explicit features of groups and users.\n In this paper, we propose a method called Pairwise Tag enhAnced and featuRe-based Matrix factorIzation for Group recommendAtioN (PTARMIGAN), which considers location features, social features, and implicit patterns simultaneously in a unified model. More specifically, we exploit matrix factorization to model interactions between users and groups. Meanwhile, we incorporate their profile information into pairwise enhanced latent factors respectively. We also utilize the linear model to capture explicit features. Due to the reinforcement between explicit features and implicit patterns, our approach can provide better group recommendations. We conducted a comprehensive performance evaluation on real word data sets and the experimental results demonstrate the effectiveness of our method.", "title": "" }, { "docid": "33a9140fb57200a489b9150d39f0ab65", "text": "In this paper, a double-quadrant state-of-charge (SoC)-based droop control method for distributed energy storage system is proposed to reach the proper power distribution in autonomous dc microgrids. In order to prolong the lifetime of the energy storage units (ESUs) and avoid the overuse of a certain unit, the SoC of each unit should be balanced and the injected/output power should be gradually equalized. Droop control as a decentralized approach is used as the basis of the power sharing method for distributed energy storage units. In the charging process, the droop coefficient is set to be proportional to the nth order of SoC, while in the discharging process, the droop coefficient is set to be inversely proportional to the nth order of SoC. Since the injected/output power is inversely proportional to the droop coefficient, it is obtained that in the charging process the ESU with higher SoC absorbs less power, while the one with lower SoC absorbs more power. Meanwhile, in the discharging process, the ESU with higher SoC delivers more power and the one with lower SoC delivers less power. Hence, SoC balancing and injected/output power equalization can be gradually realized. 
The exponent n of SoC is employed in the control diagram to regulate the speed of SoC balancing. It is found that with larger exponent n, the balancing speed is higher. MATLAB/simulink model comprised of three ESUs is implemented and the simulation results are shown to verify the proposed approach.", "title": "" }, { "docid": "8f01f446890deb021ed6c6bead0b681a", "text": "Three experiments explored whether conceptual mappings in conventional metaphors are productive, by testing whether the comprehension of novel metaphors was facilitated by first reading conceptually related conventional metaphors. The first experiment, a replication and extension of Keysar et al. [Keysar, B., Shen, Y., Glucksberg, S., Horton, W. (2000). Conventional language: How metaphorical is it? Journal of Memory and Language 43, 576–593] (Experiment 2), found no such facilitation; however, in the second experiment, upon re-designing and improving the stimulus materials, facilitation was demonstrated. In a final experiment, this facilitation was shown to be specific to the conceptual mappings involved. The authors argue that metaphor productivity provides a communicative advantage and that this may be sufficient to explain the clustering of metaphors into families noted by Lakoff and Johnson [Lakoff & Johnson, M. (1980a). The metaphorical structure of the human conceptual system. Cognitive Science 4, 195–208]. 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "5365f6f5174c3d211ea562c8a7fa0aab", "text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)", "title": "" }, { "docid": "b7319eb9dcf772c42c250f680c9596c0", "text": "Grasping unknown objects based on real-world visual input is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information, which is a sparse but powerful description of the scene. Based on this representation we generate edge-based and surface-based grasps. The results show that the method generates successful grasps, that the edge and surface information are complementary, and that the method can deal with more complex scenes. We furthermore present a benchmark for visual-based grasping.", "title": "" }, { "docid": "a8d241e45dde35c4223e07c1b4a84a67", "text": "Leishmania spp. are intracellular parasitic protozoa responsible for a group of neglected tropical diseases, endemic in 98 countries around the world, called leishmaniasis. These parasites have a complex digenetic life cycle requiring a susceptible vertebrate host and a permissive insect vector, which allow their transmission. 
The clinical manifestations associated with leishmaniasis depend on complex interactions between the parasite and the host immune system. Consequently, leishmaniasis can be manifested as a self-healing cutaneous affliction or a visceral pathology, being the last one fatal in 85-90% of untreated cases. As a result of a long host-parasite co-evolutionary process, Leishmania spp. developed different immunomodulatory strategies that are essential for the establishment of infection. Only through deception and manipulation of the immune system, Leishmania spp. can complete its life cycle and survive. The understanding of the mechanisms associated with immune evasion and disease progression is essential for the development of novel therapies and vaccine approaches. Here, we revise how the parasite manipulates cell death and immune responses to survive and thrive in the shadow of the immune system.", "title": "" }, { "docid": "22654d2ed4c921c7bceb22ce9f9dc892", "text": "xv", "title": "" }, { "docid": "366b3d17f49b7460aef5b2255c8dacdd", "text": "We give a theoretical and experimental analysis of the generalization error of cross validation using two natural measures of the problem under consideration. The approximation rate measures the accuracy to which the target function can be ideally approximated as a function of the number of parameters, and thus captures the complexity of the target function with respect to the hypothesis model. The estimation rate measures the deviation between the training and generalization errors as a function of the number of parameters, and thus captures the extent to which the hypothesis model suffers from overfitting. Using these two measures, we give a rigorous and general bound on the error of the simplest form of cross validation. The bound clearly shows the dangers of making the fraction of data saved for testingtoo large or too small. By optimizing the bound with respect to , we then argue that the following qualitative properties of cross-validation behavior should be quite robust to significant changes in the underlying model selection problem: When the target function complexity is small compared to the sample size, the performance of cross validation is relatively insensitive to the choice of . The importance of choosing optimally increases, and the optimal value for decreases, as the target function becomes more complex relative to the sample size. There is nevertheless a single fixed value for that works nearly optimally for a wide range of target function complexity.", "title": "" }, { "docid": "00632bdf7d05bf2365549fa6c59a4ea4", "text": "BACKGROUND\nLabial adhesion is relatively common, but the condition is little known among doctors and parents. The article assesses treatment in the specialist health service.\n\n\nMATERIAL AND METHOD\nThe treatment and course are assessed in 105 girls in the age group 0 – 15 years who were referred to St. Olavs Hospital in the period 2004 – 14.\n\n\nRESULTS\nThe majority of the girls (n = 63) were treated topically with oestrogen cream. In 26 of 51 girls (51 %) for whom the final result is known, the adhesion opened after one treatment. When 1 – 4 oestrogen treatments were administered, the introitus had opened completely in two out of three (65 %). Fewer than half of those who received supplementary surgical treatment achieved permanent opening.\n\n\nINTERPRETATION\nTreatment for labial adhesion had a limited effect in this study. 
As the literature suggests that the condition results in few symptoms and resolves spontaneously in virtually all girls in puberty, no compelling medical reason exists for opening the adhesion in asymptomatic girls. It is important that doctors are aware of the condition in order to prevent misdiagnosis and to provide parents with adequate information. For parents it is important to know that spontaneous resolution may result in soreness and dysuria. Knowledge of the condition can most likely prevent unnecessary worry.", "title": "" }, { "docid": "33447e2bf55a419dfec2520e9449ef0e", "text": "We present a unified unsupervised statistical model for text normalization. The relationship between standard and non-standard tokens is characterized by a log-linear model, permitting arbitrary features. The weights of these features are trained in a maximumlikelihood framework, employing a novel sequential Monte Carlo training algorithm to overcome the large label space, which would be impractical for traditional dynamic programming solutions. This model is implemented in a normalization system called UNLOL, which achieves the best known results on two normalization datasets, outperforming more complex systems. We use the output of UNLOL to automatically normalize a large corpus of social media text, revealing a set of coherent orthographic styles that underlie online language variation.", "title": "" }, { "docid": "d7eca0ca4da72bca2d74d484e4dec8ce", "text": "Recent studies have shown that the human genome has a haplotype block structure such that it can be divided into discrete blocks of limited haplotype diversity. Patil et al. [6] and Zhang et al. [12] developed algorithms to partition haplotypes into blocks with minimum number of tag SNPs for the entire chromosome. However, it is not clear how to partition haplotypes into blocks with restricted number of SNPs when only limited resources are available. In this paper, we first formulated this problem as finding a block partition with a fixed number of tag SNPs that can cover the maximal percentage of a genome. Then we solved it by two dynamic programming algorithms, which are fairly flexible to take into account the knowledge of functional polymorphism. We applied our algorithms to the published SNP data of human chromosome 21 combining with the functional information of these SNPs and demonstrated the effectiveness of them. Statistical investigation of the relationship between the starting points of a block partition and the coding and non-coding regions illuminated that the SNPs at these starting points are not significantly enriched in coding regions. We also developed an efficient algorithm to find all possible long local maximal haplotypes across a subset of samples. After applying this algorithm to the human chromosome 21 haplotype data, we found that samples with long local haplotypes are not necessarily globally similar.", "title": "" }, { "docid": "7a57ec3dbced731130f71ce571dc19ea", "text": "Wegener's granulomatosis (WG) is a systemic inflammatory disease whose histopathologic features often include necrosis, granuloma formation, and vasculitis of small-to-medium-sized vessels. WG involves many interrelated pathogenic pathways that are genetic, cell-mediated, neutrophil-mediated, humoral, and environmental. WG most commonly involves the upper respiratory tract, lungs, and kidneys, but has been reported to affect almost any organ. 
Ophthalmologic involvement is an important cause of morbidity in WG patients, occurring in approximately one-half of patients. The presence of unexplained orbital inflammatory disease, scleritis, peripheral ulcerative keratitis, cicatricial conjunctivitis, nasolacrimal duct stenosis, retinal vascular occlusion, or infrequently uveitis should raise the question of possible WG. A thorough clinical examination, laboratory testing, radiologic imaging, and histologic examination are essential to diagnosing WG and excluding potential mimics. Previously a uniformly fatal disease, treatment with cytotoxic and immunosuppressive agents has greatly improved survival. Treatment-related morbidity is a serious limitation of conventional therapies, leading to numerous ongoing studies of alternative agents.", "title": "" }, { "docid": "01cd392f0393a0694d3ffd06c4994e97", "text": "Low-dose computed tomography (LDCT) has offered tremendous benefits in radiation-restricted applications, but the quantum noise as resulted by the insufficient number of photons could potentially harm the diagnostic performance. Current image-based denoising methods tend to produce a blur effect on the final reconstructed results especially in high noise levels. In this paper, a deep learning-based approach was proposed to mitigate this problem. An adversarially trained network and a sharpness detection network were trained to guide the training process. Experiments on both simulated and real dataset show that the results of the proposed method have very small resolution loss and achieves better performance relative to state-of-the-art methods both quantitatively and visually.", "title": "" }, { "docid": "2746379baa4c59fae63dc92a9c8057bc", "text": "Twenty-five Semantic Web and Database researchers met at the 2011 STI Semantic Summit in Riga, Latvia July 6-8, 2011[1] to discuss the opportunities and challenges posed by Big Data for the Semantic Web, Semantic Technologies, and Database communities. The unanimous conclusion was that the greatest shared challenge was not only engineering Big Data, but also doing so meaningfully. The following are four expressions of that challenge from different perspectives.", "title": "" }, { "docid": "e5349bb52db819e1e454e48f4e38868e", "text": "This paper describes a design procedure for a CMOS voltage doubler. Test-bench circuit are used to verify the performance of the design. Several equations that relate performance parameters with design variables are presented. This set of equations considers both transient and steady state behavior. Various known energy losses such as switching and conduction losses were taken into account for transistors sizing. The effects of the characteristics of the pump capacitors are analyzed and evaluated through electrical simulations. A design example based on AMS 0.35μm process is presented.", "title": "" }, { "docid": "fe3a2ef6ffc3e667f73b19f01c14d15a", "text": "The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. 
The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.", "title": "" }, { "docid": "add36ca538a8ae362c0224acfa020700", "text": "A frustrating aspect of software development is that compiler error messages often fail to locate the actual cause of a syntax error. An errant semicolon or brace can result in many errors reported throughout the file. We seek to find the actual source of these syntax errors by relying on the consistency of software: valid source code is usually repetitive and unsurprising. We exploit this consistency by constructing a simple N-gram language model of lexed source code tokens. We implemented an automatic Java syntax-error locator using the corpus of the project itself and evaluated its performance on mutated source code from several projects. Our tool, trained on the past versions of a project, can effectively augment the syntax error locations produced by the native compiler. Thus we provide a methodology and tool that exploits the naturalness of software source code to detect syntax errors alongside the parser.", "title": "" } ]
scidocsrr
09460576dd0d579c743f5693c5f87efc
An inductive peaking technology for high-speed MIPI receiver bandwidth expanding in a 90 nm CMOS process
[ { "docid": "b02e195c19cc29b4f3b07ebf09e7855d", "text": "A CMOS Cherry-Hooper amplifier that is modified to include source follower feedback is described. A small signal model that uses only the most dominant capacitances is used to derive the transfer function of the circuit. The gain is significantly higher than that of the standard MOS Cherry-Hooper stage. Design techniques based on the analysis are suggested for broadband applications. A test circuit, fabricated in a 0.35 /spl mu/m CMOS technology, has 9.4dB gain and 880 MHz bandwidth while consuming 6.0 mA from a 3.3 V supply. Eye diagrams of the test chip at 630 MS/s show good eye opening, giving confidence to the new amplifier's large signal performance. In addition, a six stage main amplifier using the modified Cherry-Hooper stages was fabricated in a 0.18 /spl mu/m CMOS technology. It draws 44mA from a 1.8 V supply. It has 39 dB single-ended gain, 2.1 GHz bandwidth, and 14.2 dB noise figure.", "title": "" } ]
[ { "docid": "1e4292950f907d26b27fa79e1e8fa41f", "text": "All over the world every business and profit earning firm want to make their consumer loyal. There are many factors responsible for this customer loyalty but two of them are prominent. This research study is focused on that how customer satisfaction and customer retention contribute towards customer loyalty. For analysis part of this study, Universities students of Peshawar Region were targeted. A sample of 120 were selected from three universities of Peshawar. These universities were Preston University, Sarhad University and City University of Science and Information technology. Analysis was conducted with the help of SPSS 19. Results of the study shows that customer loyalty is more dependent upon Customer satisfaction in comparison of customer retention. Customer perceived value and customer perceived quality are the major factors which contribute for the customer loyalty of Universities students for mobile handsets.", "title": "" }, { "docid": "e05d92ac29261f1560e8d9775d39f6b4", "text": "The Architecture Engineering Construction Facilities Management (AEC/FM) industry is currently plagued with inefficiencies in access and retrieval of relevant information across the various stakeholders and actors, because the vast amount of project related information is not only diverse but the information is also highly fragmented and distributed across different sources and actors. More often than not, even if a good part of the project and task related information may be stored in the distributed information systems, the knowledge of where what is stored, and how that information can be accessed remains a tacit knowledge stored in the minds of the people involved in the project. Consequently, navigating through this distributed and fragmented information in the current practice is heavily reliant on the knowledge and experience of the individual actors in the network, who are able to guide each other to relevant information source, and in the process answering questions such as: who knows what? What information is where? Etc. Thus, to be able to access and effectively use the distributed knowledge and information held by different actors and information systems within a project, each actor needs to know the information access path, which in turn is mediated by other actors and their knowledge of the distribution of the information. In this type of distributed-knowledge network and “actor-focused thinking” when the key actor or actors leave the project, the access path to the relevant knowledge for the associated queries may also disappear, breaking the chain of queries. Therefore, we adopt an “information-focused thinking” where all project actors are considered and represented as computational and information storage entities in a knowledge network, building on the concepts and theories of Transactive Memory Systems (TMS), which primarily deal with effective management and usage of distributed knowledge sources. We further extend the explicit representation of the information entities to visual objects such that the actors can effectively understand, construct and recognize contextual relationships among the information entities through visual management and communication. 
The merits and challenges of such an approach towards visual transactive memory system for project information management are discussed using a prototype information management platform, VisuaLynk, developed around graph and linked-data concepts, and currently configured for the use phase of a project.", "title": "" }, { "docid": "39fa156732f88f7e3908f02d39670a9d", "text": "Along with the growth of the smartphone market, the furnishing of diverse and plentiful mobile Apps is surfacing as the key competitiveness of smartphones. The mobile App market has been appraised as a competitive new market carrying huge potential. Although many users download and use paid and free Apps from App Store and App Market, relevant research regarding consumer app buying is virtually non-existent. This study aims to examine the key determinants in deciding the purchase of Smartphone Apps. Because customers may have different consideration factors in deciding the purchase depending on the App type, we first classified Apps into 4 types: Productivity, Entertainment, Information, and Networking, With interviews with 30 App buyers, we identified the antecedents in the purchase of App in each type and compared them across the four types. This study has several implications for research and practice. Especially, the findings provide guidance to App developers and marketers in promoting the sales of App.", "title": "" }, { "docid": "44ffac24ef4d30a8104a2603bb1cdcb1", "text": "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them “Networks on Convolutional feature maps” (NoCs). We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015.", "title": "" }, { "docid": "9546f8a74577cc1119e48fae0921d3cf", "text": "Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. We propose a sequence-to-sequence, purely convolutional and deconvolutional autoencoding framework that is free of the above issue, while also being computationally efficient. The proposed method is simple, easy to implement and can be leveraged as a building block for many applications. We show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. 
Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrate the potential for better utilization of long unlabeled text data.", "title": "" }, { "docid": "28f9a2b2f6f4e90de20c6af78727b131", "text": "The detection and potential removal of duplicates is desirable for a number of reasons, such as to reduce the need for unnecessary storage and computation, and to provide users with uncluttered search results. This paper describes an investigation into the application of scalable simhash and shingle state of the art duplicate detection algorithms for detecting near duplicate documents in the CiteSeerX digital library. We empirically explored the duplicate detection methods and evaluated their performance and application to academic documents and identified good parameters for the algorithms. We also analyzed the types of near duplicates identified by each algorithm. The highest F-scores achieved were 0.91 and 0.99 for the simhash and shingle-based methods respectively. The shingle-based method also identified a larger variety of duplicate types than the simhash-based method.", "title": "" }, { "docid": "d4766ccd502b9c35ee83631fadc69aaf", "text": "The approach proposed by Śliwerski, Zimmermann, and Zeller (SZZ) for identifying bug-introducing changes is at the foundation of several research areas within the software engineering discipline. Despite the foundational role of SZZ, little effort has been made to evaluate its results. Such an evaluation is a challenging task because the ground truth is not readily available. By acknowledging such challenges, we propose a framework to evaluate the results of alternative SZZ implementations. The framework evaluates the following criteria: (1) the earliest bug appearance, (2) the future impact of changes, and (3) the realism of bug introduction. We use the proposed framework to evaluate five SZZ implementations using data from ten open source projects. We find that previously proposed improvements to SZZ tend to inflate the number of incorrectly identified bug-introducing changes. We also find that a single bug-introducing change may be blamed for introducing hundreds of future bugs. Furthermore, we find that SZZ implementations report that at least 46 percent of the bugs are caused by bug-introducing changes that are years apart from one another. Such results suggest that current SZZ implementations still lack mechanisms to accurately identify bug-introducing changes. Our proposed framework provides a systematic mean for evaluating the data that is generated by a given SZZ implementation.", "title": "" }, { "docid": "3ee772cb68d01c6080459820ee451657", "text": "We present a non-photorealistic rendering technique to transform color images and videos into painterly abstractions. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features, derived from the smoothed structure tensor. Contrary to conventional edge-preserving filters, our filter generates a painting-like flattening effect along the local feature directions while preserving shape boundaries. As opposed to conventional painting algorithms, it produces temporally coherent video abstraction without extra processing. The GPU implementation of our method processes video in real-time. 
The results have the clearness of cartoon illustrations but also exhibit directional information as found in oil paintings.", "title": "" }, { "docid": "c0b71e1120a65af5b71935bd4daa88fc", "text": "In a last few decades, development in power electronics systems has created its necessity in industrial and domestic applications like electric drives, UPS, solar and wind power conversion and many more. This paper presents the design, simulation, analysis and fabrication of a three phase, two-level inverter. The Space Vector Pulse Width Modulation (SVPWM) technique is used for the generation of gating signals for the three phase inverter. The proposed work is about real time embedded code generation technique that can be implemented using any microprocessor or microcontroller board of choice. The proposed technique reduces the analogue circuitry and eliminates the need of coding for generation of pulses, thereby making it simple and easy to implement. Control structure of SVPWM is simulated in MATLAB Simulink environment for analysis of different parameters of inverter. Comparative analysis of simulation results and hardware results is presented which shows that embedded code generation technique is very reliable and accurate.", "title": "" }, { "docid": "7ef2f4a771aa0d1724127c97aa21e1ea", "text": "This paper demonstrates the efficient use of Internet of Things for the traditional agriculture. It shows the use of Arduino and ESP8266 based monitored and controlled smart irrigation systems, which is also cost-effective and simple. It is beneficial for farmers to irrigate there land conveniently by the application of automatic irrigation system. This smart irrigation system has pH sensor, water flow sensor, temperature sensor and soil moisture sensor that measure respectively and based on these sensors arduino microcontroller drives the servo motor and pump. Arduino received the information and transmitted with ESP8266 Wi-Fi module wirelessly to the website through internet. This transmitted information is monitor and control by using IOT. This enables the remote control mechanism through a secure internet web connection to the user. A website has been prepared which present the actual time values and reference values of various factors needed by crops. Users can control water pumps and sprinklers through the website and keep an eye on the reference values which will help the farmer increase production with quality crops.", "title": "" }, { "docid": "dd1f8a5eae50d0a026387ba1b6695bef", "text": "Cloud computing is one of the significant development that utilizes progressive computational power and upgrades data distribution and data storing facilities. With cloud information services, it is essential for information to be saved in the cloud and also distributed across numerous customers. Cloud information repository is involved with issues of information integrity, data security and information access by unapproved users. Hence, an autonomous reviewing and auditing facility is necessary to guarantee that the information is effectively accommodated and used in the cloud. In this paper, a comprehensive survey on the state-of-art techniques in data auditing and security are discussed. Challenging problems in information repository auditing and security are presented. 
Finally, directions for future research in data auditing and security have been discussed.", "title": "" }, { "docid": "8b641a8f504b550e1eed0dca54bfbe04", "text": "Overlay architectures are programmable logic systems that are compiled on top of a traditional FPGA. These architectures give designers flexibility, and have a number of benefits, such as being designed or optimized for specific application domains, making it easier or more efficient to implement solutions, being independent of platform, allowing the ability to do partial reconfiguration regardless of the underlying architecture, and allowing compilation without using vendor tools, in some cases with fully open source tool chains. This thesis describes the implementation of two FPGA overlay architectures, ZUMA and CARBON. These overlay implementations include optimizations to reduce area and increase speed which may be applicable to many other FPGAs and also ASIC systems. ZUMA is a fine-grain overlay which resembles a modern commercial FPGA, and is compatible with the VTR open source compilation tools. The implementation includes a number of novel features tailored to efficient FPGA implementation, including the utilization of reprogrammable LUTRAMs, a novel two-stage local routing crossbar, and an area efficient configuration controller. CARBON", "title": "" }, { "docid": "f154fb6af73bc0673d208716f8b77d72", "text": "Deep autoencoder networks have successfully been applied in unsupervised dimension reduction. The autoencoder has a \"bottleneck\" middle layer of only a few hidden units, which gives a low dimensional representation for the data when the full network is trained to minimize reconstruction error. We propose using a deep bottlenecked neural network in supervised dimension reduction. Instead of trying to reproduce the data, the network is trained to perform classification. Pretraining with restricted Boltzmann machines is combined with supervised finetuning. Finetuning with supervised cost functions has been done, but with cost functions that scale quadratically. Training a bottleneck classifier scales linearly, but still gives results comparable to or sometimes better than two earlier supervised methods.", "title": "" }, { "docid": "abb06d560266ca1695f72e4d908cf6ea", "text": "A simple photovoltaic (PV) system capable of operating in grid-connected mode and using multilevel boost converter (MBC) and line commutated inverter (LCI) has been developed for extracting the maximum power and feeding it to a single phase utility grid with harmonic reduction. Theoretical analysis of the proposed system is done and the duty ratio of the MBC is estimated for extracting maximum power from PV array. For a fixed firing angle of LCI, the proposed system is able to track the maximum power with the determined duty ratio which remains the same for all irradiations. This is the major advantage of the proposed system which eliminates the use of a separate maximum power point tracking (MPPT) Experiments have been conducted for feeding a single phase voltage to the grid. So by proper and simplified technique we are reducing the harmonics in the grid for unbalanced loads.", "title": "" }, { "docid": "5e09b2302bc3dc9ca6ae8f4a3812ec1d", "text": "Learning to Reconstruct 3D Objects", "title": "" }, { "docid": "7228073bef61131c2efcdc736d90ca1b", "text": "With the advent of word representations, word similarity tasks are becoming increasing popular as an evaluation metric for the quality of the representations. 
In this paper, we present manually annotated monolingual word similarity datasets of six Indian languages – Urdu, Telugu, Marathi, Punjabi, Tamil and Gujarati. These languages are most spoken Indian languages worldwide after Hindi and Bengali. For the construction of these datasets, our approach relies on translation and re-annotation of word similarity datasets of English. We also present baseline scores for word representation models using state-of-the-art techniques for Urdu, Telugu and Marathi by evaluating them on newly created word similarity datasets.", "title": "" }, { "docid": "534996baa60c92a5aa4b25725cd5987e", "text": "Whether neural networks can learn abstract reasoning or whether they merely rely on superficial statistics is a topic of recent debate. Here, we propose a dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test. To succeed at this challenge, models must cope with various generalisation ‘regimes’ in which the training and test data differ in clearlydefined ways. We show that popular models such as ResNets perform poorly, even when the training and test sets differ only minimally, and we present a novel architecture, with a structure designed to encourage reasoning, that does significantly better. When we vary the way in which the test questions and training data differ, we find that our model is notably proficient at certain forms of generalisation, but notably weak at others. We further show that the model’s ability to generalise improves markedly if it is trained to predict symbolic explanations for its answers. Altogether, we introduce and explore ways to both measure and induce stronger abstract reasoning in neural networks. Our freely-available dataset should motivate further progress in this direction.", "title": "" }, { "docid": "b008f4477ec7bdb80bc88290a57e5883", "text": "Artificial Neural networks purport to be biomimetic, but are by definition acyclic computational graphs. As a corollary, neurons in artificial nets fire only once and have no time-dynamics. Both these properties contrast with what neuroscience has taught us about human brain connectivity, especially with regards to object recognition. We therefore propose a way to simulate feedback loops in the brain by unrolling loopy neural networks several timesteps, and investigate the properties of these networks. We compare different variants of loops, including multiplicative composition of inputs and additive composition of inputs. We demonstrate that loopy networks outperform deep feedforward networks with the same number of parameters on the CIFAR-10 dataset, as well as nonloopy versions of the same network, and perform equally well on the MNIST dataset. In order to further understand our models, we visualize neurons in loop layers with guided backprop, demonstrating that the same filters behave increasingly nonlinearly at higher unrolling levels. Furthermore, we interpret loops as attention mechanisms, and demonstrate that the composition of the loop output with the input image produces images that look qualitatively like attention maps.", "title": "" }, { "docid": "4520a0c8bdd2c0c55e181ec4bfe80d35", "text": "The authors present a case which brings out a unique modality of child homicide by placing the baby in a washing machine and turning it on. The murder was perpetrated by the baby’s mother, who suffered from a serious depressive disorder. A postmortem RX and then a forensic autopsy were performed, followed by histologic examinations and toxicology. 
On the basis of the results of the autopsy, as well as the histology and the negative toxicological data, the cause of death was identified as acute asphyxia. This diagnosis was rendered in light of the absence of other causes of death, as well as the presence of typical signs of asphyxia, such as epicardial and pleural petechiae and, above all, the microscopic examinations, which pointed out a massive acute pulmonary emphysema. Regarding the cause of the asphyxia, at least two mechanisms can be identified: drowning and smothering. In addition, the histology of the brain revealed some findings that can be regarded as a consequence of the barotrauma due to the centrifugal force applied by the rotating drum of the washing machine. Another remarkable aspect is that we are dealing with a mentally-ill assailant. In fact, the baby’s mother, after a psychiatric examination, was confirmed to be suffering from a mental illness—a severe depressive disorder—and so she was adjudicated not-guilty-by-reason-of-insanity. This case warrants attention because of its uniqueness and complexity and, above all, its usefulness in the understanding of the pathophysiology of this particular manner of death.", "title": "" }, { "docid": "9f1441bc10d7b0234a3736ce83d5c14b", "text": "Conservation of genetic diversity, one of the three main forms of biodiversity, is a fundamental concern in conservation biology as it provides the raw material for evolutionary change and thus the potential to adapt to changing environments. By means of meta-analyses, we tested the generality of the hypotheses that habitat fragmentation affects genetic diversity of plant populations and that certain life history and ecological traits of plants can determine differential susceptibility to genetic erosion in fragmented habitats. Additionally, we assessed whether certain methodological approaches used by authors influence the ability to detect fragmentation effects on plant genetic diversity. We found overall large and negative effects of fragmentation on genetic diversity and outcrossing rates but no effects on inbreeding coefficients. Significant increases in inbreeding coefficient in fragmented habitats were only observed in studies analyzing progenies. The mating system and the rarity status of plants explained the highest proportion of variation in the effect sizes among species. The age of the fragment was also decisive in explaining variability among effect sizes: the larger the number of generations elapsed in fragmentation conditions, the larger the negative magnitude of effect sizes on heterozygosity. Our results also suggest that fragmentation is shifting mating patterns towards increased selfing. We conclude that current conservation efforts in fragmented habitats should be focused on common or recently rare species and mainly outcrossing species and outline important issues that need to be addressed in future research on this area.", "title": "" } ]
scidocsrr
d93f93049619c519e11f4b4601712615
Gamifying Information Systems - a synthesis of Gamification mechanics and Dynamics
[ { "docid": "4f6a6f633e512a33fc0b396765adcdf0", "text": "Interactive systems often require calibration to ensure that input and output are optimally configured. Without calibration, user performance can degrade (e.g., if an input device is not adjusted for the user's abilities), errors can increase (e.g., if color spaces are not matched), and some interactions may not be possible (e.g., use of an eye tracker). The value of calibration is often lost, however, because many calibration processes are tedious and unenjoyable, and many users avoid them altogether. To address this problem, we propose calibration games that gather calibration data in an engaging and entertaining manner. To facilitate the creation of calibration games, we present design guidelines that map common types of calibration to core tasks, and then to well-known game mechanics. To evaluate the approach, we developed three calibration games and compared them to standard procedures. Users found the game versions significantly more enjoyable than regular calibration procedures, without compromising the quality of the data. Calibration games are a novel way to motivate users to carry out calibrations, thereby improving the performance and accuracy of many human-computer systems.", "title": "" }, { "docid": "78e21364224b9aa95f86ac31e38916ef", "text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "bf5f08174c55ed69e454a87ff7fbe6e2", "text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "f9cddbf2b0df51aeaf240240bd324b33", "text": "Grammatical agreement means that features associated with one linguistic unit (for example number or gender) become associated with another unit and then possibly overtly expressed, typically with morphological markers. It is one of the key mechanisms used in many languages to show that certain linguistic units within an utterance grammatically depend on each other. Agreement systems are puzzling because they can be highly complex in terms of what features they use and how they are expressed. Moreover, agreement systems have undergone considerable change in the historical evolution of languages. This article presents language game models with populations of agents in order to find out for what reasons and by what cultural processes and cognitive strategies agreement systems arise. It demonstrates that agreement systems are motivated by the need to minimize combinatorial search and semantic ambiguity, and it shows, for the first time, that once a population of agents adopts a strategy to invent, acquire and coordinate meaningful markers through social learning, linguistic self-organization leads to the spontaneous emergence and cultural transmission of an agreement system. The article also demonstrates how attested grammaticalization phenomena, such as phonetic reduction and conventionalized use of agreement markers, happens as a side effect of additional economizing principles, in particular minimization of articulatory effort and reduction of the marker inventory. More generally, the article illustrates a novel approach for studying how key features of human languages might emerge.", "title": "" }, { "docid": "143da39941ecc8fb69e87d611503b9c0", "text": "A dual-core 64b Xeonreg MP processor is implemented in a 65nm 8M process. The 435mm2 die has 1.328B transistors. Each core has two threads and a unified 1MB L2 cache. The 16MB unified, 16-way set-associative L3 cache implements both sleep and shut-off leakage reduction modes", "title": "" }, { "docid": "e5f38cb3857c5101111c69d7318ebcbc", "text": "Rotator cuff tendinitis is one of the main causes of shoulder pain. The objective of this study was to evaluate the possible additive effects of low-power laser treatment in combination with conventional physiotherapy endeavors in these patients. A total of 50 patients who were referred to the Physical Medicine and Rehabilitation Clinic with shoulder pain and rotator cuff disorders were selected. 
Pain severity measured with visual analogue scale (VAS), abduction, and external rotation range of motion in shoulder joint was measured by goniometry, and evaluation of daily functional abilities of patients was measured by shoulder disability questionnaire. Twenty-five of the above patients were randomly assigned into the control group and received only routine physiotherapy. The other 25 patients were assigned into the experimental group and received conventional therapy plus low-level laser therapy (4 J/cm2 at each point over a maximum of ten painful points of shoulder region for total 5 min duration). The above measurements were assessed at the end of the third week of therapy in each group and the results were analyzed statistically. In both groups, statistically significant improvement was detected in all outcome measures compared to baseline (p < 0.05). Comparison between two different groups revealed better results for control of pain (reduction in VAS average) and shoulder disability problems in the experimental group versus the control (3.1 ± 2.2 vs. 5 ± 2.6, p = 0.029 and 4.4 ± 3.1 vs. 8.5 ± 5.1, p = 0.031, respectively ) after intervention. Positive objective signs also had better results in the experimental group, but the mean range of active abduction (144.92 ± 31.6 vs. 132.80 ± 31.3) and external rotation (78.0 ± 19.5 vs. 76.3 ± 19.1) had no significant difference between the two groups (p = 0.20 and 0.77, respectively). As one of physical modalities, gallium-arsenide low-power laser combined with conventional physiotherapy has superiority over routine physiotherapy from the view of decreasing pain and improving the patient’s function, but no additional advantages were detected in increasing shoulder joint range of motion in comparison to other physical agents.", "title": "" }, { "docid": "e1bd202db576085b70f0494d29791a5b", "text": "Object class labelling is the task of annotating images with labels on the presence or absence of objects from a given class vocabulary. Simply asking one yes-no question per class, however, has a cost that is linear in the vocabulary size and is thus inefficient for large vocabularies. Modern approaches rely on a hierarchical organization of the vocabulary to reduce annotation time, but remain expensive (several minutes per image for the 200 classes in ILSVRC). Instead, we propose a new interface where classes are annotated via speech. Speaking is fast and allows for direct access to the class name, without searching through a list or hierarchy. As additional advantages, annotators can simultaneously speak and scan the image for objects, the interface can be kept extremely simple, and using it requires less mouse movement. However, a key challenge is to train annotators to only say words from the given class vocabulary. We present a way to tackle this challenge and show that our method yields high-quality annotations at significant speed gains (2.3− 14.9× faster than existing methods).", "title": "" }, { "docid": "0485beab9d781e99046042a15ea913c5", "text": "Systems for processing continuous monitoring queries over data streams must be adaptive because data streams are often bursty and data characteristics may vary over time. We focus on one particular type of adaptivity: the ability to gracefully degrade performance via \"load shedding\" (dropping unprocessed tuples to reduce system load) when the demands placed on the system cannot be met in full given available resources. 
Focusing on aggregation queries, we present algorithms that determine at what points in a query plan should load shedding be performed and what amount of load should be shed at each point in order to minimize the degree of inaccuracy introduced into query answers. We report the results of experiments that validate our analytical conclusions.", "title": "" }, { "docid": "9e208e6beed62575a92f32031b7af8ad", "text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e., GACF) are widely being installed in the Seoul metropolitan area of Korea. So far research on in-pipe robots has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of the pipe using pneumatic pressure (not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of the brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. In addition, for the pipe with diameter of 400mm, the links of the robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of the in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of the pipe is simulated by using RecurDyn® when the link of the sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of the sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment (in short, DOE). Finally, the optimal dimensions, including the thicknesses of the 4 links, will be decided in order to have the best safety factor of 2 in this paper as well as the minimum mass of the 4 links. It will be verified that the optimal design of the 4 links has the best safety factor close to 2 as well as the minimum mass of the 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of the in-pipe cleaning robot will be presented in further research.", "title": "" }, { "docid": "84646992c6de3b655f8ccd2bda3e6d4c", "text": "This paper proposes a novel fingerprint retrieval system that combines level-1 (local orientation and frequencies) and level-2 (minutiae) features. Various score- and rank-level fusion strategies and a novel hybrid fusion approach are evaluated. Extensive experiments are carried out on six public databases and a systematic comparison is made with eighteen retrieval methods and seventeen exclusive classification techniques published in the literature. The novel approach achieves impressive results: its retrieval accuracy is definitely higher than competing state-of-the-art methods, with error rates that in some cases are even one or two orders of magnitude smaller.
2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4de971edc8e677d554ae77f6976fc5d3", "text": "With the widespread use of encrypted data transport, network traffic encryption is becoming a standard nowadays. This presents a challenge for traffic measurement, especially for analysis and anomaly detection methods which are dependent on the type of network traffic. In this paper, we survey existing approaches for classification and analysis of encrypted traffic. First, we describe the most widespread encryption protocols used throughout the Internet. We show that the initiation of an encrypted connection and the protocol structure give away a lot of information for encrypted traffic classification and analysis. Then, we survey payload and feature-based classification methods for encrypted traffic and categorize them using an established taxonomy. The advantage of some of the described classification methods is the ability to recognize the encrypted application protocol in addition to the encryption protocol. Finally, we make a comprehensive comparison of the surveyed feature-based classification methods and present their weaknesses and strengths. Copyright © 2014 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "83f067159913e65410a054681461ab4d", "text": "Cloud computing has revolutionized the way computing and software services are delivered to clients on demand. It offers users the ability to connect to computing resources and access IT managed services with a previously unknown level of ease. Due to this greater level of flexibility, the cloud has become the breeding ground of a new generation of products and services. However, the flexibility of cloud-based services comes with a risk to the security and privacy of users' data. Thus, security concerns among users of the cloud have become a major barrier to the widespread growth of cloud computing. One of the security concerns of the cloud is data mining based privacy attacks that involve analyzing data over a long period to extract valuable information. In particular, in the current cloud architecture a client entrusts a single cloud provider with his data. This gives the provider, and outside attackers having unauthorized access to the cloud, an opportunity to analyze client data over a long period to extract sensitive information, which causes privacy violations for clients. This is a big concern for many clients of the cloud. In this paper, we first identify the data mining based privacy risks on cloud data and propose a distributed architecture to eliminate the risks.", "title": "" }, { "docid": "804ddcaf56ef34b0b578cc53d7cca304", "text": "This review article describes two protocols adapted from lung ultrasound: the bedside lung ultrasound in emergency (BLUE)-protocol for the immediate diagnosis of acute respiratory failure and the fluid administration limited by lung sonography (FALLS)-protocol for the management of acute circulatory failure. These applications require the mastery of 10 signs indicating normal lung surface (bat sign, lung sliding, A-lines), pleural effusions (quad and sinusoid sign), lung consolidations (fractal and tissue-like sign), interstitial syndrome (lung rockets), and pneumothorax (stratosphere sign and the lung point). These signs have been assessed in adults, with diagnostic accuracies ranging from 90% to 100%, allowing consideration of ultrasound as a reasonable bedside gold standard.
In the BLUE-protocol, profiles have been designed for the main diseases (pneumonia, congestive heart failure, COPD, asthma, pulmonary embolism, pneumothorax), with an accuracy > 90%. In the FALLS-protocol, the change from A-lines to lung rockets appears at a threshold of 18 mm Hg of pulmonary artery occlusion pressure, providing a direct biomarker of clinical volemia. The FALLS-protocol sequentially rules out obstructive, then cardiogenic, then hypovolemic shock for expediting the diagnosis of distributive (usually septic) shock. These applications can be done using simple grayscale machines and one microconvex probe suitable for the whole body. Lung ultrasound is a multifaceted tool also useful for decreasing radiation doses (of interest in neonates where the lung signatures are similar to those in adults), from ARDS to trauma management, and from ICUs to points of care. If done in suitable centers, training is the least of the limitations for making use of this kind of visual medicine.", "title": "" }, { "docid": "733ddc5a642327364c2bccb6b1258fac", "text": "Human memory is unquestionably a vital cognitive ability but one that can often be unreliable. External memory aids such as diaries, photos, alarms and calendars are often employed to assist in remembering important events in our past and future. The recent trend for lifelogging, continuously documenting ones life through wearable sensors and cameras, presents a clear opportunity to augment human memory beyond simple reminders and actually improve its capacity to remember. This article surveys work from the fields of computer science and psychology to understand the potential for such augmentation, the technologies necessary for realising this opportunity and to investigate what the possible benefits and ethical pitfalls of using such technology might be.", "title": "" }, { "docid": "f9bd86958566868d2da17aad9c5029df", "text": "A Multi-Agent System (MAS) is an organization of coordinated autonomous agents that interact in order to achieve common goals. Considering real world organizations as an analogy, this paper proposes architectural styles for MAS which adopt concepts from organization theory and strategic alliances literature. The styles are intended to represent a macro-level architecture of a MAS, and they are modeled using the i* framework which offers the notions of actor, goal and actor dependency for modeling multi-agent settings. The styles are also specified as metaconcepts in the Telos modeling language. Moreover, each style is evaluated with respect to a set of software quality attributes, such as predictability and adaptability. The paper also explores the adoption of micro-level patterns proposed elsewhere in order to give a finer-grain description of a MAS architecture. These patterns define how goals assigned to actors participating in an organizational architecture will be fulfilled by agents. An e-business example illustrates both the styles and patterns proposed in this work. The research is being conducted within the context of Tropos, a comprehensive software development methodology for agent-oriented software.", "title": "" }, { "docid": "b181559966c55d90741f62e645b7d2f7", "text": "BACKGROUND AND AIMS\nPsychological stress is associated with inflammatory bowel disease [IBD], but the nature of this relationship is complex. At present, there is no simple tool to screen for stress in IBD clinical practice or assess stress repeatedly in longitudinal studies. 
Our aim was to design a single-question 'stressometer' to rapidly measure stress and validate this in IBD patients.\n\n\nMETHODS\nIn all, 304 IBD patients completed a single-question 'stressometer'. This was correlated with stress as measured by the Depression Anxiety Stress Scales [DASS-21], quality of life, and disease activity. Test-retest reliability was assessed in 31 patients who completed the stressometer and the DASS-21 on two occasions 4 weeks apart.\n\n\nRESULTS\nStressometer levels correlated with the DASS-21 stress dimension in both Crohn's disease [CD] (Spearman's rank correlation coefficient [rs] 0.54; p < 0.001) and ulcerative colitis [UC] [rs 0.59; p < 0.001]. Stressometer levels were less closely associated with depression and anxiety [rs range 0.36 to 0.49; all p-values < 0.001]. Stressometer scores correlated with all four Short Health Scale quality of life dimensions in both CD and UC [rs range 0.35 to 0.48; all p-values < 0.001] and with disease activity in Crohn's disease [rs 0.46; p < 0.001] and ulcerative colitis [rs 0.20; p = 0.02]. Responsiveness was confirmed with a test-retest correlation of 0.43 [p = 0.02].\n\n\nCONCLUSIONS\nThe stressometer is a simple, valid, and responsive measure of psychological stress in IBD patients and may be a useful patient-reported outcome measure in future IBD clinical and research assessments.", "title": "" }, { "docid": "f3b9269e3d6e6098384eda277129864c", "text": "Action planning using learned and differentiable forward models of the world is a general approach which has a number of desirable properties, including improved sample complexity over modelfree RL methods, reuse of learned models across different tasks, and the ability to perform efficient gradient-based optimization in continuous action spaces. However, this approach does not apply straightforwardly when the action space is discrete. In this work, we show that it is in fact possible to effectively perform planning via backprop in discrete action spaces, using a simple paramaterization of the actions vectors on the simplex combined with input noise when training the forward model. Our experiments show that this approach can match or outperform model-free RL and discrete planning methods on gridworld navigation tasks in terms of performance and/or planning time while using limited environment interactions, and can additionally be used to perform model-based control in a challenging new task where the action space combines discrete and continuous actions. We furthermore propose a policy distillation approach which yields a fast policy network which can be used at inference time, removing the need for an iterative planning procedure.", "title": "" }, { "docid": "46e8609b7cf5cfc970aa75fa54d3551d", "text": "BACKGROUND\nAims were to assess the efficacy of metacognitive training (MCT) in people with a recent onset of psychosis in terms of symptoms as a primary outcome and metacognitive variables as a secondary outcome.\n\n\nMETHOD\nA multicenter, randomized, controlled clinical trial was performed. A total of 126 patients were randomized to an MCT or a psycho-educational intervention with cognitive-behavioral elements. The sample was composed of people with a recent onset of psychosis, recruited from nine public centers in Spain. The treatment consisted of eight weekly sessions for both groups. Patients were assessed at three time-points: baseline, post-treatment, and at 6 months follow-up. The evaluator was blinded to the condition of the patient. 
Symptoms were assessed with the PANSS and metacognition was assessed with a battery of questionnaires of cognitive biases and social cognition.\n\n\nRESULTS\nBoth MCT and psycho-educational groups had improved symptoms post-treatment and at follow-up, with greater improvements in the MCT group. The MCT group was superior to the psycho-educational group on the Beck Cognitive Insight Scale (BCIS) total (p = 0.026) and self-certainty (p = 0.035) and dependence self-subscale of irrational beliefs, comparing baseline and post-treatment. Moreover, comparing baseline and follow-up, the MCT group was better than the psycho-educational group in self-reflectiveness on the BCIS (p = 0.047), total BCIS (p = 0.045), and intolerance to frustration (p = 0.014). Jumping to Conclusions (JTC) improved more in the MCT group than the psycho-educational group (p = 0.021). Regarding the comparison within each group, Theory of Mind (ToM), Personalizing Bias, and other subscales of irrational beliefs improved in the MCT group but not the psycho-educational group (p < 0.001-0.032).\n\n\nCONCLUSIONS\nMCT could be an effective psychological intervention for people with recent onset of psychosis in order to improve cognitive insight, JTC, and tolerance to frustration. It seems that MCT could be useful to improve symptoms, ToM, and personalizing bias.", "title": "" }, { "docid": "31c2dc8045f43c7bf1aa045e0eb3b9ad", "text": "This paper addresses the task of functional annotation of genes from biomedical literature. We view this task as a hierarchical text categorization problem with Gene Ontology as a class hierarchy. We present a novel global hierarchical learning approach that takes into account the semantics of a class hierarchy. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not consider any hierarchical information. In addition, we propose a novel hierarchical evaluation measure that gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy.", "title": "" }, { "docid": "d6d8ef59feb54c76fdcc43b31b9bf5f8", "text": "We consider the classical TD(0) algorithm implemented on a network of agents wherein the agents also incorporate updates received from neighboring agents using a gossip-like mechanism. The combined scheme is shown to converge for both discounted and average cost problems.", "title": "" }, { "docid": "38a18bfce2cb33b390dd7c7cf5a4afd1", "text": "Automatic photo assessment is a high emerging research field with wide useful ‘real-world’ applications. Due to the recent advances in deep learning, one can observe very promising approaches in the last years. However, the proposed solutions are adapted and optimized for ‘isolated’ datasets making it hard to understand the relationship between them and to benefit from the complementary information. Following a unifying approach, we propose in this paper a learning model that integrates the knowledge from different datasets. We conduct a study based on three representative benchmark datasets for photo assessment. Instead of developing for each dataset a specific model, we design and adapt sequentially a unique model which we nominate UNNA. UNNA consists of a deep convolutional neural network, that predicts for a given image three kinds of aesthetic information: technical quality, high-level semantical quality, and a detailed description of photographic rules. 
Due to the sequential adaptation that exploits the common features between the chosen datasets, UNNA has comparable performance to the state-of-the-art solutions with effectively fewer parameters. The final architecture of UNNA gives us some interesting indications of the kinds of shared features as well as the individual aspects of the considered datasets.", "title": "" }, { "docid": "91ed0637e0533801be8b03d5ad21d586", "text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality, strong harmonic suppression, and enhanced bandwidth of the rat-race coupler have generated much interest and continue to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.", "title": "" } ]
scidocsrr
ecbea5f976b36a7d6e9cec541b9c6879
A Self-Service Supporting Business Intelligence and Big Data Analytics Architecture
[ { "docid": "a44b74738723580f4056310d6856bb74", "text": "This book covers the theory and principles of core avionic systems in civil and military aircraft, including displays, data entry and control systems, fly by wire control systems, inertial sensor and air data systems, navigation, autopilot systems an... Use the latest data mining best practices to enable timely, actionable, evidence-based decision making throughout your organization! Real-World Data Mining demystifies current best practices, showing how to use data mining to uncover hidden patterns ... Data Warehousing in the Age of the Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Ex... This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in data base systems and new data base applications and is also designed to give a broad, yet ....", "title": "" } ]
[ { "docid": "88ea3f043b43a11a0a7d79e59a774c1f", "text": "The purpose of this paper is to present an alternative systems thinking–based perspective and approach to the requirements elicitation process in complex situations. Three broad challenges associated with the requirements engineering elicitation in complex situations are explored, including the (1) role of the system observer, (2) nature of system requirements in complex situations, and (3) influence of the system environment. Authors have asserted that the expectation of unambiguous, consistent, complete, understandable, verifiable, traceable, and modifiable requirements is not consistent with complex situations. In contrast, complex situations are an emerging design reality for requirements engineering processes, marked by high levels of ambiguity, uncertainty, and emergence. This paper develops the argument that dealing with requirements for complex situations requires a change in paradigm. The elicitation of requirements for simple and technically driven systems is appropriately accomplished by proven methods. In contrast, the elicitation of requirements in complex situations (e.g., integrated multiple critical infrastructures, system-of-systems, etc.) requires more holistic thinking and can be enhanced by grounding in systems theory.", "title": "" }, { "docid": "2f471c24ccb38e70627eba6383c003e0", "text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.", "title": "" }, { "docid": "12f717b4973a5290233d6f03ba05626b", "text": "We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. 
This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.", "title": "" }, { "docid": "0e796ac2c27a1811eaafb8e3a65c7d59", "text": "When dealing with large graphs, such as those that arise in the context of online social networks, a subset of nodes may be labeled. These labels can indicate demographic values, interest, beliefs or other characteristics of the nodes (users). A core problem is to use this information to extend the labeling so that all nodes are assigned a label (or labels). In this chapter, we survey classification techniques that have been proposed for this problem. We consider two broad categories: methods based on iterative application of traditional classifiers using graph information as features, and methods which propagate the existing labels via random walks. We adopt a common perspective on these methods to highlight the similarities between different approaches within and across the two categories. We also describe some extensions and related directions to the central problem of node classification.", "title": "" }, { "docid": "10202f2c14808988ca74b7efe5079949", "text": "Multiagent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, and economics. The complexity of many tasks arising in these domains makes them difficult to solve with preprogrammed agent behaviors. The agents must, instead, discover a solution on their own, using learning. A significant part of the research on multiagent learning concerns reinforcement learning techniques. This paper provides a comprehensive survey of multiagent reinforcement learning (MARL). A central issue in the field is the formal statement of the multiagent learning goal. Different viewpoints on this issue have led to the proposal of many different goals, among which two focal points can be distinguished: stability of the agents' learning dynamics, and adaptation to the changing behavior of the other agents. The MARL algorithms described in the literature aim---either explicitly or implicitly---at one of these two goals or at a combination of both, in a fully cooperative, fully competitive, or more general setting. A representative selection of these algorithms is discussed in detail in this paper, together with the specific issues that arise in each category. Additionally, the benefits and challenges of MARL are described along with some of the problem domains where the MARL techniques have been applied. Finally, an outlook for the field is provided.", "title": "" }, { "docid": "ab3d4c0562847c6a4ebfe4ab398d8e74", "text": "Self-compassion refers to a kind and nurturing attitude toward oneself during situations that threaten one’s adequacy, while recognizing that being imperfect is part of being human. Although growing evidence indicates that selfcompassion is related to a wide range of desirable psychological outcomes, little research has explored self-compassion in older adults. The present study investigated the relationships between self-compassion and theoretically based indicators of psychological adjustment, as well as the moderating effect of self-compassion on self-rated health. 
A sample of 121 older adults recruited from a community library and a senior day center completed self-report measures of self-compassion, self-esteem, psychological well-being, anxiety, and depression. Results indicated that self-compassion is positively correlated with age, self-compassion is positively and uniquely related to psychological well-being, and self-compassion moderates the association between self-rated health and depression. These results suggest that interventions designed to increase self-compassion in older adults may be a fruitful direction for future applied research.", "title": "" }, { "docid": "8af61009253af61dd6d4daf0ad4be30c", "text": "Forensic anthropologists often rely on the state of decomposition to estimate the postmortem interval (PMI) in a human remains case. The state of decomposition can provide much information about the PMI, especially when decomposition is treated as a semi-continuous variable and used in conjunction with accumulated-degree-days (ADD). This preliminary study demonstrates a supplemental method of determining the PMI based on scoring decomposition using a point-based system and taking into account temperatures in which the remains were exposed. This project was designed to examine the ways that forensic anthropologists could improve their PMI estimates based on decomposition by using a more quantitative approach. A total of 68 human remains cases with a known date of death were scored for decomposition and a regression equation was calculated to predict ADD from decomposition score. ADD accounts for approximately 80% of the variation in decomposition. This study indicates that decomposition is best modeled as dependent on accumulated temperature, not just time.", "title": "" }, { "docid": "083cb6546aecdc12c2a1e36a9b8d9b67", "text": "Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semisupervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.1", "title": "" }, { "docid": "c8cd0c0ebd38b3e287d6e6eed965db6b", "text": "Goalball, one of the official Paralympic events, is popular with visually impaired people all over the world. The purpose of goalball is to throw the specialized ball, with bells inside it, to the goal line of the opponents as many times as possible while defenders try to block the thrown ball with their bodies. 
Since goalball players cannot rely on visual information, they need to grasp the game situation using their auditory sense. However, it is hard, especially for beginners, to perceive the direction and distance of the thrown ball. In addition, they generally tend to be afraid of the approaching ball because, without visual information, they could be hit by a high-speed ball. In this paper, our goal is to develop an application called GoalBaural (Goalball + aural) that enables goalball players to improve the recognizability of the direction and distance of a thrown ball without going onto the court and playing goalball. The evaluation result indicated that our application would be efficient in improving the speed and the accuracy of locating the balls.", "title": "" }, { "docid": "a88d96ab8202d7328b97f68902d0a41b", "text": "How the motor-related cortical areas modulate the activity of the output nuclei of the basal ganglia is an important issue for understanding the mechanisms of motor control by the basal ganglia. The cortico-subthalamo-pallidal 'hyperdirect' pathway conveys powerful excitatory effects from the motor-related cortical areas to the globus pallidus, bypassing the striatum, with shorter conduction time than effects conveyed through the striatum. We emphasize the functional significance of the 'hyperdirect' pathway and propose a dynamic 'center-surround model' of basal ganglia function in the control of voluntary limb movements. When a voluntary movement is about to be initiated by cortical mechanisms, a corollary signal conveyed through the cortico-subthalamo-pallidal 'hyperdirect' pathway first inhibits large areas of the thalamus and cerebral cortex that are related to both the selected motor program and other competing programs. Then, another corollary signal through the cortico-striato-pallidal 'direct' pathway disinhibits their targets and releases only the selected motor program. Finally, the third corollary signal possibly through the cortico-striato-external pallido-subthalamo-internal pallidal 'indirect' pathway inhibits their targets extensively. Through this sequential information processing, only the selected motor program is initiated, executed and terminated at the selected timing, whereas other competing programs are canceled.", "title": "" }, { "docid": "e5673ab37cb9095946d96399aa340bcc", "text": "Water reclamation and reuse provides a unique and viable opportunity to augment traditional water supplies. As a multi-disciplined and important element of water resources development and management, water reuse can help to close the loop between water supply and wastewater disposal. Effective water reuse requires integration of water and reclaimed water supply functions. The successful development of this dependable water resource depends upon close examination and synthesis of elements from infrastructure and facilities planning, wastewater treatment plant siting, treatment process reliability, economic and financial analyses, and water utility management. In this paper, fundamental concepts of water reuse are discussed including definitions, historical developments, the role of water recycling in the hydrologic cycle, categories of water reuse, water quality criteria and regulatory requirements, and technological innovations for the safe use of reclaimed water. 
The paper emphasizes the integration of this alternative water supply into water resources planning, and the emergence of modern water reclamation and reuse practices from wastewater to reclaimed water to repurified water.", "title": "" }, { "docid": "4de2536d5c56d6ade1b3eff97ac8037a", "text": "We develop a maximum-likelihood (ML) algorithm for estimation and correction (autofocus) of phase errors induced in synthetic-aperture-radar (SAR) imagery. Here, M pulse vectors in the range-compressed domain are used as input for simultaneously estimating M − 1 phase values across the aperture. The solution involves an eigenvector of the sample covariance matrix of the range-compressed data. The estimator is then used within the basic structure of the phase gradient autofocus (PGA) algorithm, replacing the original phase-estimation kernel. We show that, in practice, the new algorithm provides excellent restorations to defocused SAR imagery, typically in only one or two iterations. The performance of the new phase estimator is demonstrated essentially to achieve the Cramer-Rao lower bound on estimation-error variance for all but small values of target-to-clutter ratio. We also show that for the case in which M is equal to 2, the ML estimator is similar to that of the original PGA method but achieves better results in practice, owing to a bias inherent in the original PGA phase-estimation kernel. Finally, we discuss the relationship of these algorithms to the shear-averaging and spatial-correlation methods, two other phase-correction techniques that utilize the same phase-estimation kernel but that produce substantially poorer performance because they do not employ several fundamental signal-processing steps that are critical to the algorithms of the PGA class.", "title": "" }, { "docid": "435fcf5dab986fd87db6fc24fef3cc1a", "text": "Web applications make life more convenient through the activities they support. Many web applications take several kinds of user input (e.g. personal information, a user's comments on commercial goods, etc.) for these activities. However, there are various vulnerabilities in the input functions of web applications, and the free accessibility of web applications makes it possible to attempt malicious actions. Attacks that exploit these input vulnerabilities are performed by injecting malicious web code, enabling various illegal actions such as SQL Injection Attacks (SQLIAs) and Cross Site Scripting (XSS). These actions come down to theft or replacement of personal information, or phishing. Many solutions have been devised for malicious web code, such as AMNESIA [1] and SQL Check [2], but these methods use a parser for the code, are limited to fixed and very small patterns, and are difficult to adapt to variations. Machine learning methods can cover a far broader range of malicious web code and are easy to adapt to variations and changes. Therefore, we suggest adaptable classification of malicious web code by machine learning approaches such as Support Vector Machine (SVM) [3], Naïve-Bayes [4], and the k-Nearest Neighbor algorithm [5] for detecting exploitative user inputs.", "title": "" }, { "docid": "102a9eb7ba9f65a52c6983d74120430e", "text": "A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain.
This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). The current article examines the viability of the causal steps approach in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) is tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation. Traditionally, the X → Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect. At first glance, the requirement of a significant X → Y association prior to examining mediation seems reasonable. If there is no significant X → Y relationship, how can there be any mediation of it?
Furthermore, the requirement that X → Y become nonsignificant when controlling for the mediator seems sensible in order to claim 'full mediation'. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.", "title": "" }, { "docid": "9420760d6945440048cee3566ce96699", "text": "In this work, we develop a computer vision based fall prevention system for hospital ward applications. To prevent potential falls, once the event of a patient getting up from the bed is automatically detected, nursing staff are alerted immediately for assistance. For the detection task, we use an RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.", "title": "" }, { "docid": "342d074c84d55b60a617d31026fe23e1", "text": "Fractured bones heal by a cascade of cellular events in which mesenchymal cells respond to unknown regulators by proliferating, differentiating, and synthesizing extracellular matrix. Current concepts suggest that growth factors may regulate different steps in this cascade (10). Recent studies suggest regulatory roles for PDGF, aFGF, bFGF, and TGF-beta in the initiation and the development of the fracture callus. Fracture healing begins immediately following injury, when growth factors, including TGF-beta 1 and PDGF, are released into the fracture hematoma by platelets and inflammatory cells. TGF-beta 1 and FGF are synthesized by osteoblasts and chondrocytes throughout the healing process. TGF-beta 1 and PDGF appear to have an influence on the initiation of fracture repair and the formation of cartilage and intramembranous bone in the initiation of callus formation. Acidic FGF is synthesized by chondrocytes, chondrocyte precursors, and macrophages. It appears to stimulate the proliferation of immature chondrocytes or precursors, and indirectly regulates chondrocyte maturation and the expression of the cartilage matrix. Presumably, growth factors in the callus at later times regulate additional steps in repair of the bone after fracture. These studies suggest that growth factors are central regulators of cellular proliferation, differentiation, and extracellular matrix synthesis during fracture repair. Abnormal growth factor expression has been implicated as causing impaired or abnormal healing in other tissues, suggesting that altered growth factor expression also may be responsible for abnormal or delayed fracture repair. As a complete understanding of fracture-healing regulation evolves, we expect new insights into the etiology of abnormal or delayed fracture healing, and possibly new therapies for these difficult clinical problems.", "title": "" }, { "docid": "55158927c639ed62b53904b97a0f7a97", "text": "Speech comprehension and production are governed by control processes. We explore their nature and dynamics in bilingual speakers with a focus on speech production.
Prior research indicates that individuals increase cognitive control in order to achieve a desired goal. In the adaptive control hypothesis we propose a stronger hypothesis: Language control processes themselves adapt to the recurrent demands placed on them by the interactional context. Adapting a control process means changing a parameter or parameters about the way it works (its neural capacity or efficiency) or the way it works in concert, or in cascade, with other control processes (e.g., its connectedness). We distinguish eight control processes (goal maintenance, conflict monitoring, interference suppression, salient cue detection, selective response inhibition, task disengagement, task engagement, opportunistic planning). We consider the demands on these processes imposed by three interactional contexts (single language, dual language, and dense code-switching). We predict adaptive changes in the neural regions and circuits associated with specific control processes. A dual-language context, for example, is predicted to lead to the adaptation of a circuit mediating a cascade of control processes that circumvents a control dilemma. Effective test of the adaptive control hypothesis requires behavioural and neuroimaging work that assesses language control in a range of tasks within the same individual.", "title": "" }, { "docid": "05b716c1e84b842710b07e06731beed7", "text": "_____________________________________________________________________________ Corporate boards are comprised of individual directors but make decisions as a group. The quality of their decisions affects firm value. In this study, we focus on one aspect of board structure–– director overlap––the overlap in service for a given pair of directors in a given firm, averaged across all director pairs in the firm. Greater overlap among directors can lead to negative synergies through groupthink, a mode of thinking by highly cohesive groups where the desire for consensus potentially overrides critical evaluation of all possible alternatives. Alternatively, greater overlap can lead to positive synergies through a reduction in coordination and communication costs, resulting in more effective teamwork. We hypothesize that: (i) director overlap will have a more negative effect on firm value for dynamic firms, which value critical thinking and hence stand to lose more from groupthink; and (ii) director overlap will have a more positive effect on firm value in complex firms, which have higher coordination costs and hence benefit from better teamwork. We find results consistent with our predictions. Our results have implications for the term limits of directors because term limits impose a ceiling on director overlap. ______________________________________________________________________________ JEL Classifications: G32; G34; K22", "title": "" }, { "docid": "97a7c48145d682a9ed45109d83c82a73", "text": "We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial amount of such inference questions. 
The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community.", "title": "" } ]
scidocsrr
8a9a960688dfbd0bb9ac38020efe8bc4
Fingerprint Recognition Using Minutia Score Matching
[ { "docid": "0a9debb7b20310f2f693b5c2b9a03576", "text": "minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same finger tip. Marking all the minutiae accurately as well as rejecting false minutiae is another issue still under research. Our work has combined many methods to build a minutia extractor and a minutia matcher. The combination of multiple methods comes from a wide investigation into research papers. Also some novel changes like segmentation using Morphological operations, improved thinning, false minutiae removal methods, minutia marking with special considering the triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in the unified x-y coordinate system after a two-step transformation are used in the work.", "title": "" } ]
[ { "docid": "fd392f5198794df04c70da6bc7fe2f0d", "text": "Performance tuning in modern database systems requires a lot of expertise, is very time consuming and often misdirected. Tuning attempts often lack a methodology that has a holistic view of the database. The absence of historical diagnostic information to investigate performance issues at first occurrence exacerbates the whole tuning process often requiring that problems be reproduced before they can be correctly diagnosed. In this paper we describe how Oracle overcomes these challenges and provides a way to perform automatic performance diagnosis and tuning. We define a new measure called ‘Database Time’ that provides a common currency to gauge the performance impact of any resource or activity in the database. We explain how the Automatic Database Diagnostic Monitor (ADDM) automatically diagnoses the bottlenecks affecting the total database throughput and provides actionable recommendations to alleviate them. We also describe the types of performance measurements that are required to perform an ADDM analysis. Finally we show how ADDM plays a central role within Oracle 10g’s manageability framework to self-manage a database and provide a comprehensive tuning solution.", "title": "" }, { "docid": "587ca964abb5708c896e2e4475116a6d", "text": "The design and implementation of software for medical devices is challenging due to the closed-loop interaction with the patient, which is a stochastic physical environment. The safety-critical nature and the lack of existing industry standards for verification make this an ideal domain for exploring applications of formal modeling and closed-loop analysis. The biggest challenge is that the environment model(s) have to be both complex enough to express the physiological requirements and general enough to cover all possible inputs to the device. In this effort, we use a dual chamber implantable pacemaker as a case study to demonstrate verification of software specifications of medical devices as timed-automata models in UPPAAL. The pacemaker model is based on the specifications and algorithm descriptions from Boston Scientific. The heart is modeled using timed automata based on the physiology of heart. The model is gradually abstracted with timed simulation to preserve properties. A manual Counter-Example-Guided Abstraction and Refinement (CEGAR) framework has been adapted to refine the heart model when spurious counter-examples are found. To demonstrate the closed-loop nature of the problem and heart model refinement, we investigated two clinical cases of Pacemaker Mediated Tachycardia and verified their corresponding correction algorithms in the pacemaker. Along with our tools for code generation from UPPAAL models, this effort enables model-driven design and certification of software for medical devices.", "title": "" }, { "docid": "a0d4d6c36cab8c5ed5be69bea1d8f302", "text": "In this paper, we propose a simple, fast decoding algorithm that fosters diversity in neural generation. The algorithm modifies the standard beam search algorithm by adding an intersibling ranking penalty, favoring choosing hypotheses from diverse parents. We evaluate the proposed model on the tasks of dialogue response generation, abstractive summarization and machine translation. We find that diverse decoding helps across all tasks, especially those for which reranking is needed. 
We further propose a variation that is capable of automatically adjusting its diversity decoding rates for different inputs using reinforcement learning (RL). We observe a further performance boost from this RL technique.1", "title": "" }, { "docid": "8afd1ab45198e9960e6a047091a2def8", "text": "We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied.", "title": "" }, { "docid": "4ae4aa05befe374ab4e06d1c002efb53", "text": "The convincing development in Internet of Things (IoT) enables the solutions to spur the advent of novel and fascinating applications. The main aim is to integrate IoT aware architecture to enhance smart healthcare systems for automatic environmental monitoring of hospital and patient health. Staying true to the IoT vision, we propose a smart hospital system (SHS), which relies on different, yet complimentary, technologies, specifically RFID, WSN and smart mobile, interoperating with each other through a Constrained Application Protocol (CoAP)/IPv6 over low-power wireless personal area network (6LoWPAN)/representational state transfer (REST) network infrastructure. RADIO frequency identification technologies have been increasingly used in various applications, such as inventory control, and object tracking. An RFID system typically consist of one or several readers and numerous tags. Each tag has a unique ID. The proposed SHS has highlighted a number of key capabilities and aspects of novelty, which represent a significant step forward.", "title": "" }, { "docid": "91cb2ee27517441704bf739ee811d6c6", "text": "The primo vascular system has a specific anatomical and immunohistochemical signature that sets it apart from the arteriovenous and lymphatic systems. With immune and endocrine functions, the primo vascular system has been found to play a large role in biological processes, including tissue regeneration, inflammation, and cancer metastases. Although scientifically confirmed in 2002, the original discovery was made in the early 1960s by Bong-Han Kim, a North Korean scientist. It would take nearly 40 years after that discovery for scientists to revisit Kim's research to confirm the early findings. The presence of primo vessels in and around blood and lymph vessels, nerves, viscera, and fascia, as well as in the brain and spinal cord, reveals a common link that could potentially open novel possibilities of integration with cranial, lymphatic, visceral, and fascial approaches in manual medicine.", "title": "" }, { "docid": "779d5380c72827043111d00510e32bfd", "text": "OBJECTIVE\nThe purpose of this review is 2-fold. 
The first is to provide a review for physiatrists already providing care for women with musculoskeletal pelvic floor pain and a resource for physiatrists who are interested in expanding their practice to include this patient population. The second is to describe how musculoskeletal dysfunctions involving the pelvic floor can be approached by the physiatrist using the same principles used to evaluate and treat others dysfunctions in the musculoskeletal system. This discussion clarifies that evaluation and treatment of pelvic floor pain of musculoskeletal origin is within the scope of practice for physiatrists. The authors review the anatomy of the pelvic floor, including the bony pelvis and joints, muscle and fascia, and the peripheral and autonomic nervous systems. Pertinent history and physical examination findings are described. The review concludes with a discussion of differential diagnosis and treatment of musculoskeletal pelvic floor pain in women. Improved recognition of pelvic floor dysfunction by healthcare providers will reduce impairment and disability for women with pelvic floor pain. A physiatrist is in the unique position to treat the musculoskeletal causes of this condition because it requires an expert grasp of anatomy, function, and the linked relationship between the spine and pelvis. Further research regarding musculoskeletal causes and treatment of pelvic floor pain will help validate these concepts and improve awareness and care for women limited by this condition.", "title": "" }, { "docid": "b84971bc1f2d2ebf43815d33cea86c8c", "text": "The container-inhabiting mosquito simulation model (CIMSiM) is a weather-driven, dynamic life table simulation model of Aedes aegypti (L.) and similar nondiapausing Aedes mosquitoes that inhabit artificial and natural containers. This paper presents a validation of CIMSiM simulating Ae. aegypti using several independent series of data that were not used in model development. Validation data sets include laboratory work designed to elucidate the role of diet on fecundity and rates of larval development and survival. Comparisons are made with four field studies conducted in Bangkok, Thailand, on seasonal changes in population dynamics and with a field study in New Orleans, LA, on larval habitat. Finally, predicted ovipositional activity of Ae. aegypti in seven cities in the southeastern United States for the period 1981-1985 is compared with a data set developed by the U.S. Public Health Service. On the basis of these comparisons, we believe that, for stated design goals, CIMSiM adequately simulates the population dynamics of Ae. aegypti in response to specific information on weather and immature habitat. We anticipate that it will be useful in simulation studies concerning the development and optimization of control strategies and that, with further field validation, can provide entomological inputs for a dengue virus transmission model.", "title": "" }, { "docid": "c06c067294cbb7bbc129324591d2636c", "text": "In this article, we propose a new method for localizing optic disc in retinal images. Localizing the optic disc and its center is the first step of most vessel segmentation, disease diagnostic, and retinal recognition algorithms. We use optic disc of the first four retinal images in DRIVE dataset to extract the histograms of each color component. Then, we calculate the average of histograms for each color as template for localizing the center of optic disc. 
The DRIVE, STARE, and a local dataset including 273 retinal images are used to evaluate the proposed algorithm. The success rate was 100, 91.36, and 98.9%, respectively.", "title": "" }, { "docid": "173811394fd49c15b151fc9059acbe13", "text": "The 'jewel in the crown' from the MIT90s [Management in the 90s] program is undoubtedly the Strategic Alignment Model (SAM) of Henderson and Venkatraman.", "title": "" }, { "docid": "61615273dad80e5a0a95ecbe3002fd72", "text": "Other than serving as building blocks for DNA and RNA, purine metabolites provide a cell with the necessary energy and cofactors to promote cell survival and proliferation. A renewed interest in how purine metabolism may fuel cancer progression has uncovered a new perspective into how a cell regulates purine need. Under cellular conditions of high purine demand, the de novo purine biosynthetic enzymes cluster near mitochondria and microtubules to form dynamic multienzyme complexes referred to as 'purinosomes'. In this review, we highlight the purinosome as a novel level of metabolic organization of enzymes in cells, its consequences for regulation of purine metabolism, and the extent that purine metabolism is being targeted for the treatment of cancers.", "title": "" }, { "docid": "c3ee2beee84cd32e543c4b634062eeac", "text": "In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.", "title": "" }, { "docid": "0d4deabaaf6f78b16c4880e6179a76d8", "text": "Alcohol drinking has been associated with increased blood pressure in epidemiological studies. We conducted a meta-analysis of randomized controlled trials to assess the effects of alcohol reduction on blood pressure. We included 15 randomized control trials (total of 2234 participants) published before June 1999 in which alcohol reduction was the only intervention difference between active and control treatment groups. Using a standard protocol, information on sample size, participant characteristics, study design, intervention methods, duration, and treatment results was abstracted independently by 3 investigators. By means of a fixed-effects model, findings from individual trials were pooled after results for each trial were weighted by the inverse of its variance. Overall, alcohol reduction was associated with a significant reduction in mean (95% confidence interval) systolic and diastolic blood pressures of -3.31 mm Hg (-2.52 to -4.10 mm Hg) and -2.04 mm Hg (-1.49 to -2.58 mm Hg), respectively. A dose-response relationship was observed between mean percentage of alcohol reduction and mean blood pressure reduction. 
Effects of intervention were enhanced in those with higher baseline blood pressure. Our study suggests that alcohol reduction should be recommended as an important component of lifestyle modification for the prevention and treatment of hypertension among heavy drinkers.", "title": "" }, { "docid": "2dfb8e3f50c1968b441872fa4aa13fec", "text": "An ultra-wideband Vivaldi antenna with dual-polarization capability is presented. A two-section quarter-wave balun feedline is developed to feed the tapered slot antenna, which improves the impedance matching performance especially in the low frequency regions. The dual-polarization is realized by orthogonally combining two identical Vivaldi antennas without a galvanic contact. Measured results have been presented with a fractional bandwidth of 172% from 0.56 GHz to 7.36 GHz for S11 < −10 dB and a good port isolation of S21 < −22 dB. The measured antenna gain of up to 9.4 dBi and cross-polarization discrimination (XPD) of more than 18 dB is achieved, making the antenna suitable for mobile communication testing in chambers or open-site facilities.", "title": "" }, { "docid": "d35082d022280d25eea3e98596b70839", "text": "OVERVIEW 795 DEFINING PROPERTIES OF THE BIOECOLOGICAL MODEL 796 Proposition I 797 Proposition II 798 FROM THEORY TO RESEARCH DESIGN: OPERATIONALIZING THE BIOECOLOGICAL MODEL 799 Developmental Science in the Discovery Mode 801 Different Paths to Different Outcomes: Dysfunction versus Competence 803 The Role of Experiments in the Bioecological Model 808 HOW DO PERSON CHARACTERISTICS INFLUENCE LATER DEVELOPMENT? 810 Force Characteristics as Shapers of Development 810 Resource Characteristics of the Person as Shapers of Development 812 Demand Characteristics of the Person as Developmental Inf luences 812 THE ROLE OF FOCUS OF ATTENTION IN PROXIMAL PROCESSES 813 PROXIMAL PROCESSES IN SOLO ACTIVITIES WITH OBJECTS AND SYMBOLS 814 THE MICROSYSTEM MAGNIFIED: ACTIVITIES, RELATIONSHIPS, AND ROLES 814 Effects of the Physical Environment on Psychological Development 814 The Mother-Infant Dyad as a Context of Development 815 BEYOND THE MICROSYSTEM 817 The Expanding Ecological Universe 818 Nature-Nurture Reconceptualized: A Bioecological Interpretation 819 TIME IN THE BIOECOLOGICAL MODEL: MICRO-, MESO-, AND MACROCHRONOLOGICAL SYSTEMS 820 FROM RESEARCH TO REALITY 822 THE BIOECOLOGICAL MODEL: A DEVELOPMENTAL ASSESSMENT 824 REFERENCES 825", "title": "" }, { "docid": "14f127a8dd4a0fab5acd9db2a3924657", "text": "Pesticides (herbicides, fungicides or insecticides) play an important role in agriculture to control the pests and increase the productivity to meet the demand of foods by a remarkably growing population. Pesticides application thus became one of the important inputs for the high production of corn and wheat in USA and UK, respectively. It also increased the crop production in China and India [1-4]. Although extensive use of pesticides improved in securing enough crop production worldwide however; these pesticides are equally toxic or harmful to nontarget organisms like mammals, birds etc and thus their presence in excess can cause serious health and environmental problems. Pesticides have thus become environmental pollutants as they are often found in soil, water, atmosphere and agricultural products, in harmful levels, posing an environmental threat. Its residual presence in agricultural products and foods can also exhibit acute or chronic toxicity on human health. 
Even at low levels, it can cause adverse effects on humans, plants, animals and ecosystems. Thus, monitoring of these pesticide and its residues become extremely important to ensure that agricultural products have permitted levels of pesticides [5-6]. Majority of pesticides belong to four classes, namely organochlorines, organophosphates, carbamates and pyrethroids. Organophosphates pesticides are a class of insecticides, of which many are highly toxic [7]. Until the 21st century, they were among the most widely used insecticides which included parathion, malathion, methyl parathion, chlorpyrifos, diazinon, dichlorvos, dimethoate, monocrotophos and profenofos. Organophosphate pesticides cause toxicity by inhibiting acetylcholinesterase enzyme [8]. It acts as a poison to insects and other animals, such as birds, amphibians and mammals, primarily by phosphorylating the acetylcholinesterase enzyme (AChE) present at nerve endings. This leads to the loss of available AChE and because of the excess acetylcholine (ACh, the impulse-transmitting substance), the effected organ becomes over stimulated. The enzyme is critical to control the transmission of nerve impulse from nerve fibers to the smooth and skeletal muscle cells, secretary cells and autonomic ganglia, and within the central nervous system (CNS). Once the enzyme reaches a critical level due to inactivation by phosphorylation, symptoms and signs of cholinergic poisoning get manifested [9].", "title": "" }, { "docid": "1e2768be2148ff1fd102c6621e8da14d", "text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.", "title": "" }, { "docid": "25b77292def9ba880fecb58a38897400", "text": "In this paper, we present a successful operation of Gallium Nitride(GaN)-based three-phase inverter with high efficiency of 99.3% for driving motor at 900W under the carrier frequency of 6kHz. This efficiency well exceeds the value by IGBT (Insulated Gate Bipolar Transistor). This demonstrates that GaN has a great potential for power switching application competing with SiC. Fully reduced on-state resistance in a new normally-off GaN transistor called Gate Injection Transistor (GIT) greatly helps to increase the efficiency. In addition, use of the bidirectional operation of the lateral and compact GITs with synchronous gate driving, the inverter is operated free from fly-wheel diodes which have been connected in parallel with IGBTs in a conventional inverter system.", "title": "" }, { "docid": "394854761e27aa7baa6fa2eea60f347d", "text": "Our goal is to complement an entity ranking with human-readable explanations of how those retrieved entities are connected to the information need. 
Relation extraction technology should aid in finding such support passages, especially in combination with entities and query terms. This work explores how the current state of the art in unsupervised relation extraction (OpenIE) contributes to a solution for the task, assessing potential, limitations, and avenues for further investigation.", "title": "" }, { "docid": "daf751e821c730db906c40ccf4678a90", "text": "Data provided by Internet of Things (IoT) are time series and have some specific characteristics that must be considered with regard to storage and management. IoT data is very likely to be stored in NoSQL system databases where there are some particular engine and compaction strategies to manage time series data. In this article, two of these strategies found in the open source Cassandra database system are described, analyzed and compared. The configuration of these strategies is not trivial and may be very time consuming. To provide indicators, the strategy with the best time performance had its main parameter tested along 14 different values and results are shown, related to both response time and storage space needed. The results may help users to configure their IoT NoSQL databases in an efficient setup, may help designers to improve database compaction strategies or encourage the community to set new default values for the compaction strategies.", "title": "" } ]
scidocsrr
80fdd5b3d91cfc2c6e561cdf529eabb5
Artificial Roughness Encoding with a Bio-inspired MEMS-based Tactile Sensor Array
[ { "docid": "f3ee129af2a833f8775c5366c188d71c", "text": "Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability—in addition to poor cosmetic appearance—are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb–finger opposition and, accordingly, can generate a multitude of grasps. Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided in two main layers: a high-level control that interprets the user’s intention (grasp selection and required force level) and can provide pertinent sensory feedback and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control.", "title": "" } ]
[ { "docid": "afffadc35ac735d11e1a415c93d1c39f", "text": "We examine self-control problems — modeled as time-inconsistent, presentbiased preferences—in a model where a person must do an activity exactly once. We emphasize two distinctions: Do activities involve immediate costs or immediate rewards, and are people sophisticated or naive about future self-control problems? Naive people procrastinate immediate-cost activities and preproperate—do too soon—immediate-reward activities. Sophistication mitigates procrastination, but exacerbates preproperation. Moreover, with immediate costs, a small present bias can severely harm only naive people, whereas with immediate rewards it can severely harm only sophisticated people. Lessons for savings, addiction, and elsewhere are discussed. (JEL A12, B49, C70, D11, D60, D74, D91, E21)", "title": "" }, { "docid": "ba0fab446ba760a4cb18405a05cf3979", "text": "Please c Disaster Summary. — This study aims at understanding the role of education in promoting disaster preparedness. Strengthening resilience to climate-related hazards is an urgent target of Goal 13 of the Sustainable Development Goals. Preparing for a disaster such as stockpiling of emergency supplies or having a family evacuation plan can substantially minimize loss and damages from natural hazards. However, the levels of household disaster preparedness are often low even in disaster-prone areas. Focusing on determinants of personal disaster preparedness, this paper investigates: (1) pathways through which education enhances preparedness; and (2) the interplay between education and experience in shaping preparedness actions. Data analysis is based on face-to-face surveys of adults aged 15 years in Thailand (N = 1,310) and the Philippines (N = 889, female only). Controlling for socio-demographic and contextual characteristics, we find that formal education raises the propensity to prepare against disasters. Using the KHB method to further decompose the education effects, we find that the effect of education on disaster preparedness is mainly mediated through social capital and disaster risk perception in Thailand whereas there is no evidence that education is mediated through observable channels in the Philippines. This suggests that the underlying mechanisms explaining the education effects are highly context-specific. Controlling for the interplay between education and disaster experience, we show that education raises disaster preparedness only for those households that have not been affected by a disaster in the past. Education improves abstract reasoning and anticipation skills such that the better educated undertake preventive measures without needing to first experience the harmful event and then learn later. In line with recent efforts of various UN agencies in promoting education for sustainable development, this study provides a solid empirical evidence showing positive externalities of education in disaster risk reduction. 2017TheAuthors.PublishedbyElsevierLtd.This is an open access article under theCCBY-NC-ND license (http://creativecommons.org/ licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "07a42e7b4c5bc8088e9ff9b57c46f5fb", "text": "In this paper, the concept of divergent component of motion (DCM, also called “Capture Point”) is extended to 3-D. 
We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external forces and the total force (i.e., external plus gravitational forces) acting on the robot. Based on eCMP, VRP, and DCM, we present methods for real-time planning and tracking control of DCM trajectories in 3-D. The basic DCM trajectory generator is extended to produce continuous leg force profiles and to facilitate the use of toe-off motion during double support. The robustness of the proposed control framework is thoroughly examined, and its capabilities are verified both in simulations and experiments.", "title": "" }, { "docid": "4523c880e099da9bbade4870da04f0c4", "text": "Despite the hype about blockchains and distributed ledgers, formal abstractions of these objects are scarce1. To face this issue, in this paper we provide a proper formulation of a distributed ledger object. In brief, we de ne a ledger object as a sequence of records, and we provide the operations and the properties that such an object should support. Implemen- tation of a ledger object on top of multiple (possibly geographically dispersed) computing devices gives rise to the distributed ledger object. In contrast to the centralized object, dis- tribution allows operations to be applied concurrently on the ledger, introducing challenges on the consistency of the ledger in each participant. We provide the de nitions of three well known consistency guarantees in terms of the operations supported by the ledger object: (1) atomic consistency (linearizability), (2) sequential consistency, and (3) eventual consistency. We then provide implementations of distributed ledgers on asynchronous message passing crash- prone systems using an Atomic Broadcast service, and show that they provide eventual, sequen- tial or atomic consistency semantics respectively. We conclude with a variation of the ledger the validated ledger which requires that each record in the ledger satis es a particular validation rule.", "title": "" }, { "docid": "7e91815398915670fadba3c60e772d14", "text": "Online reviews are valuable resources not only for consumers to make decisions before purchase, but also for providers to get feedbacks for their services or commodities. In Aspect Based Sentiment Analysis (ABSA), it is critical to identify aspect categories and extract aspect terms from the sentences of user-generated reviews. However, the two tasks are often treated independently, even though they are closely related. Intuitively, the learned knowledge of one task should inform the other learning task. In this paper, we propose a multi-task learning model based on neural networks to solve them together. We demonstrate the improved performance of our multi-task learning model over the models trained separately on three public dataset released by SemEval work-", "title": "" }, { "docid": "30740e33cdb2c274dbd4423e8f56405e", "text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. 
Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.", "title": "" }, { "docid": "af4db4d9be3f652445a47e2985070287", "text": "BACKGROUND\nSurgical Site Infections (SSIs) are infections of incision or deep tissue at operation sites. These infections prolong hospitalization, delay wound healing, and increase the overall cost and morbidity.\n\n\nOBJECTIVES\nThis study aimed to investigate anaerobic and aerobic bacteria prevalence in surgical site infections and determinate antibiotic susceptibility pattern in these isolates.\n\n\nMATERIALS AND METHODS\nOne hundred SSIs specimens were obtained by needle aspiration from purulent material in depth of infected site. These specimens were cultured and incubated in both aerobic and anaerobic condition. For detection of antibiotic susceptibility pattern in aerobic and anaerobic bacteria, we used disk diffusion, agar dilution, and E-test methods.\n\n\nRESULTS\nA total of 194 bacterial strains were isolated from 100 samples of surgical sites. Predominant aerobic and facultative anaerobic bacteria isolated from these specimens were the members of Enterobacteriaceae family (66, 34.03%) followed by Pseudomonas aeruginosa (26, 13.4%), Staphylococcus aureus (24, 12.37%), Acinetobacter spp. (18, 9.28%), Enterococcus spp. (16, 8.24%), coagulase negative Staphylococcus spp. (14, 7.22%) and nonhemolytic streptococci (2, 1.03%). Bacteroides fragilis (26, 13.4%), and Clostridium perfringens (2, 1.03%) were isolated as anaerobic bacteria. The most resistant bacteria among anaerobic isolates were B. fragilis. All Gram-positive isolates were susceptible to vancomycin and linezolid while most of Enterobacteriaceae showed sensitivity to imipenem.\n\n\nCONCLUSIONS\nMost SSIs specimens were polymicrobial and predominant anaerobic isolate was B. fragilis. Isolated aerobic and anaerobic strains showed high level of resistance to antibiotics.", "title": "" }, { "docid": "044de981e34f0180accfb799063a7ec1", "text": "This paper proposes a novel hybrid full-bridge three-level LLC resonant converter. It integrates the advantages of the hybrid full-bridge three-level converter and the LLC resonant converter. It can operate not only under three-level mode but also under two-level mode, so it is very suitable for wide input voltage range application, such as fuel cell power system. The input current ripple and output filter can also be reduced. Three-level leg switches just sustain only half of the input voltage. ZCS is achieved for the rectifier diodes, and the voltage stress across the rectifier diodes can be minimized to the output voltage. The main switches can realize ZVS from zero to full load. A 200-400 V input, 360 V/4 A output prototype converter is built in our lab to verify the operation principle of the proposed converter", "title": "" }, { "docid": "427ebc0500e91e842873c4690cdacf79", "text": "Bounding volume hierarchy (BVH) has been widely adopted as the acceleration structure in broad-phase collision detection. Previous state-of-the-art BVH-based collision detection approaches exploited the spatio-temporal coherence of simulations by maintaining a bounding volume test tree (BVTT) front. 
A major drawback of these algorithms is that large deformations in the scenes decrease culling efficiency and slow down collision queries. Moreover, for front-based methods, the inefficient caching on GPU caused by the arbitrary layout of BVH and BVTT front nodes becomes a critical performance issue. We present a fast and robust BVH-based collision detection scheme on GPU that addresses the above problems by ordering and restructuring BVHs and BVTT fronts. Our techniques are based on the use of histogram sort and an auxiliary structure BVTT front log, through which we analyze the dynamic status of BVTT front and BVH quality. Our approach efficiently handles interand intra-object collisions and performs especially well in simulations where there is considerable spatio-temporal coherence. The benchmark results demonstrate that our approach is significantly faster than the previous BVH-based method, and also outperforms other state-of-the-art spatial subdivision schemes in terms of speed. CCS Concepts •Computing methodologies → Collision detection; Physical simulation;", "title": "" }, { "docid": "c447e34a5048c7fe2d731aaa77b87dd3", "text": "Bullying, in both physical and cyber worlds, has been recognized as a serious health issue among adolescents. Given its significance, scholars are charged with identifying factors that influence bullying involvement in a timely fashion. However, previous social studies of bullying are handicapped by data scarcity. The standard psychological science approach to studying bullying is to conduct personal surveys in schools. The sample size is typically in the hundreds, and these surveys are often collected only once. On the other hand, the few computational studies narrowly restrict themselves to cyberbullying, which accounts for only a small fraction of all bullying episodes.", "title": "" }, { "docid": "0eec3e2c266f6c8dd39b38320a4e70fa", "text": "The development of Urdu Nastalique O Character Recognition (OCR) is a challenging task due to the cursive nature of Urdu, complexities of Nastalique writing style and layouts of Urdu document images. In this paper, the framework of Urdu Nastalique OCR is presented. The presented system supports the recognition of Urdu Nastalique document images having font size between 14 to 44. has 86.15% ligature recognition accuracy tested on 224 document images.", "title": "" }, { "docid": "c2fc4e65c484486f5612f4006b6df102", "text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. 
Empirical results show the superiority of our approach against other counterparts.", "title": "" }, { "docid": "924eb275a1205dbf7907a58fc1cee5b6", "text": "BACKGROUND\nNutrient status of B vitamins, particularly folate and vitamin B-12, may be related to cognitive ageing but epidemiological evidence remains inconclusive.\n\n\nOBJECTIVE\nThe aim of this study was to estimate the association of serum folate and vitamin B-12 concentrations with cognitive function in middle-aged and older adults from three Central and Eastern European populations.\n\n\nMETHODS\nMen and women aged 45-69 at baseline participating in the Health, Alcohol and Psychosocial factors in Eastern Europe (HAPIEE) study were recruited in Krakow (Poland), Kaunas (Lithuania) and six urban centres in the Czech Republic. Tests of immediate and delayed recall, verbal fluency and letter search were administered at baseline and repeated in 2006-2008. Serum concentrations of biomarkers at baseline were measured in a sub-sample of participants. Associations of vitamin quartiles with baseline (n=4166) and follow-up (n=2739) cognitive domain-specific z-scores were estimated using multiple linear regression.\n\n\nRESULTS\nAfter adjusting for confounders, folate was positively associated with letter search and vitamin B-12 with word recall in cross-sectional analyses. In prospective analyses, participants in the highest quartile of folate had higher verbal fluency (p<0.01) and immediate recall (p<0.05) scores compared to those in the bottom quartile. In addition, participants in the highest quartile of vitamin B-12 had significantly higher verbal fluency scores (β=0.12; 95% CI=0.02, 0.21).\n\n\nCONCLUSIONS\nFolate and vitamin B-12 were positively associated with performance in some but not all cognitive domains in older Central and Eastern Europeans. These findings do not lend unequivocal support to potential importance of folate and vitamin B-12 status for cognitive function in older age. Long-term longitudinal studies and randomised trials are required before drawing conclusions on the role of these vitamins in cognitive decline.", "title": "" }, { "docid": "34a46b80f025cd8cd25243a777b4ff6a", "text": "This research attempts to investigate the effects of blog marketing on brand attitude and purchase intention. The elements of blog marketing are identified as community identification, interpersonal trust, message exchange, and two-way communication. The relationships among variables are pictured on the fundamental research framework provided by this study. Data were collected via an online questionnaire and 727 useable samples were collected and analyzed utilizing AMOS 5.0. The empirical findings show that the blog marketing elements can impact on brand attitude positively except for the element of community identification. Further, the analysis result also verifies the moderating effects on the relationship between blog marketing elements and brand attitude.", "title": "" }, { "docid": "f1cb1df8ad0b78f0f47b2cfcf2e9c5b6", "text": "Quantitative performance analysis in sports has become mainstream in the last decade. The focus of the analyses is shifting towards more sport-speci ic metrics due to novel technologies. These systems measure the movements of the players and the events happening during trainings and games. This allows for a more detailed evaluation of professional athletes with implications on areas such as opponent scouting, planning of training sessions, or player scouting. 
Previous works that analyze soccer-related logs focus on the game-related performance of the players and teams. The vast majority of these methodologies concentrate on descriptive statistics that capture some part of the players’ strategy. For example, in the case of soccer, the average numbers of shots, goals, fouls, and passes are derived both for the teams and for the players [1, 5]. Other works identify and analyze the outcome of the strategies that teams apply [18, 16, 13, 11, 9, 24, 14]. However, the physical performance and in particular the movements of players have not received detailed attention yet. It is challenging to get access to datasets related to the physical performance of soccer players. The teams consider such information highly confidential, especially if it covers in-game performance. Despite the fact that numerous teams have deployed player tracking systems in their stadiums, datasets of this nature are not available for research or for public usage. It is nearly impossible to have quantitative information on the physical performance of all the teams of a competition. Hence, most of the analysis and evaluation of the players’ performance does not contain much information on the physical aspect of the game, creating a blind spot in performance analysis. We propose a novel method to solve this issue by deriving movement characteristics of soccer players. We use event-based datasets from data provider companies covering 50+ soccer leagues, allowing us to analyze the movement profiles of potentially tens of thousands of players without any major investment. Our methodology does not require an expensive, dedicated player tracking system deployed in the stadium. Instead, if the game is broadcast, our methodology can be used. As a consequence, our technique does not require the consent of the involved teams, yet it can provide insights on the physical performance of many players in different teams. The main contribution of our work is threefold:", "title": "" }, { "docid": "5eb526843c41d2549862b60c17110b5b", "text": "We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of social-ecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, and meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seems to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.", "title": "" }, { "docid": "7fa8d82b55c5ae2879123380ef1a8505", "text": "In the general context of Knowledge Discovery, specific techniques, called Text Mining techniques, are necessary to extract information from unstructured textual data. The extracted information can then be used for the classification of the content of large textual bases.
In this paper, we present two examples of information that can be automatically extracted from text collections: probabilistic associations of keywords and prototypical document instances. The Natural Language Processing (NLP) tools necessary for such extractions are also presented.", "title": "" }, { "docid": "3038334926608dbe4cdb091cf0e955eb", "text": "Cloud computing has undergone rapid expansion throughout the last decade. Many companies and organizations have made the transition from traditional data centers to the cloud due to its flexibility and lower cost. However, traditional data centers are still being relied upon by those who are less certain about the security of the cloud. This problem is highlighted by the fact that there only exist limited efforts on threat modeling for cloud data centers. In this paper, we conduct comprehensive threat modeling exercises based on two representative cloud infrastructures using several popular threat modeling methods, including attack surface, attack trees, attack graphs, and security metrics based on attack trees and attack graphs, respectively. Those threat modeling efforts provide cloud providers practical lessons and means toward better evaluating, understanding, and improving their cloud infrastructures. Our results may also imbed more confidence in potential cloud tenants by providing them a clearer picture about potential threats in cloud infrastructures and corresponding solutions.", "title": "" }, { "docid": "9f04ac4067179aadf5e429492c7625e9", "text": "We provide a model that links an asset’s market liquidity — i.e., the ease with which it is traded — and traders’ funding liquidity — i.e., the ease with which they can obtain funding. Traders provide market liquidity, and their ability to do so depends on their availability of funding. Conversely, traders’ funding, i.e., their capital and the margins they are charged, depend on the assets’ market liquidity. We show that, under certain conditions, margins are destabilizing and market liquidity and funding liquidity are mutually reinforcing, leading to liquidity spirals. The model explains the empirically documented features that market liquidity (i) can suddenly dry up, (ii) has commonality across securities, (iii) is related to volatility, (iv) is subject to “flight to quality”, and (v) comoves with the market, and it provides new testable predictions.", "title": "" }, { "docid": "fa20d7bf8a6e99691a42dcd756ed1cc6", "text": "IoT (Internet of Things) is a communication network that connects physical objects, or things, to each other or with a group all together. Its use is widely popular nowadays and its usage has expanded into interesting subjects. In particular, it is becoming more popular to carry out research in cross-disciplinary subjects that mix smart systems with computer science and engineering applications. Object detection is one of these subjects. Real-time object detection is one of the most interesting of them because of its computational costs. Gaps in methodology, unknown concepts, and insufficiency in mathematical modeling make it harder to design these computing algorithms. Algorithms in these applications can be developed with machine learning and/or numerical methods that are available in the scientific literature. These operations are possible only if objects can communicate among themselves in physical space and are aware of the objects nearby. Artificial Neural Networks may help in these studies.
In this study, the YOLO algorithm, which is seen as a key element for real-time object detection in IoT, is investigated. Alongside this research, which takes the YOLO algorithm as its foundation [10], the optimization of the computation and the analysis of the system are also realized and shown in the results. As a result, it is seen that our model approach has interesting potential and novelty.", "title": "" } ]
scidocsrr
c2ee1f1e8bc5b50cdb12761b88029339
Business Process Analytics
[ { "docid": "4ca4ccd53064c7a9189fef3e801612a0", "text": "workflows, data warehousing, business intelligence Process design and automation technologies are being increasingly used by both traditional and newly-formed, Internet-based enterprises in order to improve the quality and efficiency of their administrative and production processes, to manage e-commerce transactions, and to rapidly and reliably deliver services to businesses and individual customers.", "title": "" } ]
[ { "docid": "1381104da316d0e1b66fce7f3b51a153", "text": "Automatic segmentation and quantification of skeletal structures has a variety of applications for biological research. Although solutions for good quality X-ray images of human skeletal structures are in existence in recent years, automatic solutions working on poor quality X-ray images of mice are rare. This paper proposes a fully automatic solution for spine segmentation and curvature quantification from X-ray images of mice. The proposed solution consists of three stages, namely preparation of the region of interest, spine segmentation, and spine curvature quantification, aiming to overcome technical difficulties in processing the X-ray images. We examined six different automatic measurements for quantifying the spine curvature through tests on a sample data set of 100 images. The experimental results show that some of the automatic measures are very close to and consistent with the best manual measurement results by annotators. The test results also demonstrate the effectiveness of the curvature quantification produced by the proposed solution in distinguishing abnormally shaped spines from the normal ones with accuracy up to 98.6%.", "title": "" }, { "docid": "e9db97070b87e567ff7904fe40f30086", "text": "OBJECTIVES\nCongenital adrenal hyperplasia (CAH) is a disease that occurs during fetal development and can lead to virilization in females or death in newborn males if not discovered early in life. Because of this there is a need to seek morphological markers in order to help diagnose the disease. In order to test the hypothesis that prenatal hormones can affect the sexual dimorphic pattern 2D:4D digit ratio in individual with CAH, the aim of this study was to compare the digit ratio in female and male patients with CAH and control subjects.\n\n\nMETHODS\nThe 2D:4D ratios in both hands of 40 patients (31 females-46, XX, and 9 males-46, XY) were compared with the measures of control individuals without CAH (100 males and 100 females).\n\n\nRESULTS\nFemales with CAH showed 2D:4D ratios typical of male controls (0.950 and 0.947) in both hands (P < 0.001). In CAH males the left hand 2D:4D ratio (0.983) was statistically different from that of male controls (P < 0.05).\n\n\nCONCLUSIONS\nThese finding support the idea that sexual dimorphism in skeletal development in early fetal life is associated with differences between the exposure to androgens in males and females, and significant differences associated with adrenal hyperplasia. Although the effects of prenatal androgens on skeletal developmental are supported by numerous studies, further investigation is yet required to clarify the disease and establish the digit ratio as a biomarker for CAH.", "title": "" }, { "docid": "1420f07e309c114dfc264797ab82ceec", "text": "Introduction: The knowledge of clinical spectrum and epidemiological profile of critically ill children plays a significant role in the planning of health policies that would mitigate various factors related to the evolution of diseases prevalent in these sectors. The data collected enable prospective comparisons to be made with benchmark standards including regional and international units for the continuous pursuit of providing essential health care and improving the quality of patient care. Purpose: To study the clinical spectrum and epidemiological profile of the critically ill children admitted to the pediatric intensive care unit at a tertiary care center in South India. 
Materials and Methods: Descriptive data were collected retrospectively from the Hospital medical records between 2013 and 2016. Results: A total of 1833 patients were analyzed during the 3-year period, of which 1166 (63.6%) were males and 667 (36.4%) were females. A mean duration of stay in pediatric intensive care unit (PICU) was 2.21 ± 1.90 days. Respiratory system was the most common system affected in our study 738 (40.2 %). Acute poisoning in children constituted 99 patients (5.4%). We observed a mortality rate of 1.96%, with no association with age or sex. The mortality rate was highest in infants below 1-year of age (50%). In our study, the leading systemic cause for both admission and death was the respiratory system. Conclusion: This study analyses the epidemiological pattern of patients admitted to PICU in South India. We would also like to emphasize on public health prevention strategies and community health education which needs to be reinforced, especially in remote places and in rural India. This, in turn, would help in decreasing the cases of unknown bites, scorpion sting, poisoning and arthropod-borne illnesses, which are more prevalent in this part of the country.", "title": "" }, { "docid": "46c2d96220d670115f9b4dba4e600ec8", "text": "The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platform in the context of big data analytics, specific implementation level details of the widely used k-means clustering algorithm on various platforms are also described in the form pseudocode.", "title": "" }, { "docid": "6a1a9c6cb2da06ee246af79fdeedbed9", "text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is must to exploit and inherit the advantages and opportunities, it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization to not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. 
This type of analysis makes those organizations capable to assess, what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from year 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); support vector machine; SLR; systematic literature review", "title": "" }, { "docid": "a112cd31e136054bdf9d34c82b960d95", "text": "We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "b45bb513f7bd9de4941785490945d53e", "text": "Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.", "title": "" }, { "docid": "8930924a223ef6a8d19e52ab5c6e7736", "text": "Modern perception systems are notoriously complex, featuring dozens of interacting parameters that must be tuned to achieve good performance. 
Conventional tuning approaches require expensive ground truth, while heuristic methods are difficult to generalize. In this work, we propose an introspective ground-truth-free approach to evaluating the performance of a generic perception system. By using the posterior distribution estimate generated by a Bayesian estimator, we show that the expected performance can be estimated efficiently and without ground truth. Our simulated and physical experiments in a demonstrative indoor ground robot state estimation application show that our approach can order parameters similarly to using a ground-truth system, and is able to accurately identify top-performing parameters in varying contexts. In contrast, baseline approaches that reason only about observation log-likelihood fail in the face of challenging perceptual phenomena.", "title": "" }, { "docid": "69bb10420be07fe9fb0fd372c606d04e", "text": "Contextual text mining is concerned with extracting topical themes from a text collection with context information (e.g., time and location) and comparing/analyzing the variations of themes over different contexts. Since the topics covered in a document are usually related to the context of the document, analyzing topical themes within context can potentially reveal many interesting theme patterns. In this paper, we generalize some of these models proposed in the previous work and we propose a new general probabilistic model for contextual text mining that can cover several existing models as special cases. Specifically, we extend the probabilistic latent semantic analysis (PLSA) model by introducing context variables to model the context of a document. The proposed mixture model, called contextual probabilistic latent semantic analysis (CPLSA) model, can be applied to many interesting mining tasks, such as temporal text mining, spatiotemporal text mining, author-topic analysis, and cross-collection comparative analysis. Empirical experiments show that the proposed mixture model can discover themes and their contextual variations effectively.", "title": "" }, { "docid": "242a2f64fc103af641320c1efe338412", "text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. 
We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.", "title": "" }, { "docid": "471e835e66b1bdfabd5de8a14914e9e6", "text": "Context. The theme of the 2003 annual meeting is \"accountability for educational quality\". The emphasis on accountability reflects the increasing need for educators, students and politicians to demonstrate the effectiveness of educational systems. As part of the growing emphasis on accountability, high stakes achievement tests have become increasingly important and a student's performance on such tests can have a significant impact on his or her access to future educational opportunities. At the same time, concern is growing that the use of high stakes achievement tests, such as the SATMath exam and others (e.g., the Massachusetts MCAS exam) simply exacerbates existing group differences, and puts female students and those from traditionally underrepresented minority groups at a disadvantage (Willingham & Cole, 1997). New approaches are required to help all students perform to the best of their ability on high stakes tests.", "title": "" }, { "docid": "c72a2e504934580f9542a62b7037cdd4", "text": "Software defect prediction is one of the most active research areas in software engineering. We can build a prediction model with defect data collected from a software project and predict defects in the same project, i.e. within-project defect prediction (WPDP). Researchers also proposed cross-project defect prediction (CPDP) to predict defects for new projects lacking in defect data by using prediction models built by other projects. In recent studies, CPDP is proved to be feasible. However, CPDP requires projects that have the same metric set, meaning the metric sets should be identical between projects. As a result, current techniques for CPDP are difficult to apply across projects with heterogeneous metric sets. To address the limitation, we propose heterogeneous defect prediction (HDP) to predict defects across projects with heterogeneous metric sets. Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to WPDP with statistical significance.", "title": "" }, { "docid": "2258a0ba739557d489a796f050fad3e0", "text": "The term fractional calculus is more than 300 years old. It is a generalization of the ordinary differentiation and integration to non-integer (arbitrary) order. The subject is as old as the calculus of differentiation and goes back to times when Leibniz, Gauss, and Newton invented this kind of calculation. In a letter to L’Hospital in 1695 Leibniz raised the following question (Miller and Ross, 1993): “Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders?\" The story goes that L’Hospital was somewhat curious about that question and replied by another question to Leibniz. “What if the order will be 1/2?\" Leibniz in a letter dated September 30, 1695 replied: “It will lead to a paradox, from which one day useful consequences will be drawn.\" The question raised by Leibniz for a fractional derivative was an ongoing topic in the last 300 years. Several mathematicians contributed to this subject over the years. 
People like Liouville, Riemann, and Weyl made major contributions to the theory of fractional calculus. The story of the fractional calculus continued with contributions from Fourier, Abel, Leibniz, Grünwald, and Letnikov. Nowadays, the fractional calculus attracts many scientists and engineers. There are several applications of this mathematical phenomenon in mechanics, physics, chemistry, control theory and so on (Caponetto et al., 2010; Magin, 2006; Monje et al., 2010; Oldham and Spanier, 1974; Oustaloup, 1995; Podlubny, 1999). It is natural that many authors tried to solve the fractional derivatives, fractional integrals and fractional differential equations in Matlab. A few very good and interesting Matlab functions were already submitted to the MathWorks, Inc. Matlab Central File Exchange, where they are freely downloadable for sharing among the users. In this chapter we will use some of them. It is worth mentioning some addition to Matlab toolboxes, which are appropriate for the solution of fractional calculus problems. One of them is a toolbox created by CRONE team (CRONE, 2010) and another one is the Fractional State–Space Toolkit developed by Dominik Sierociuk (Sierociuk, 2005). Last but not least we should also mention a Matlab toolbox created by Dingyü Xue (Xue, 2010), which is based on Matlab object for fractional-order transfer function and some manipulation with this class of the transfer function. Despite that the mentioned toolboxes are mainly for control systems, they can be “abused\" for solutions of general problems related to fractional calculus as well. 10", "title": "" }, { "docid": "322fd3b0c6c833bac9598b510dc40b98", "text": "Quality assessment is an indispensable technique in a large body of media applications, i.e., photo retargeting, scenery rendering, and video summarization. In this paper, a fully automatic framework is proposed to mimic how humans subjectively perceive media quality. The key is a locality-preserved sparse encoding algorithm that accurately discovers human gaze shifting paths from each image or video clip. In particular, we first extract local image descriptors from each image/video, and subsequently project them into the so-called perceptual space. Then, a nonnegative matrix factorization (NMF) algorithm is proposed that represents each graphlet by a linear and sparse combination of the basis ones. Since each graphlet is visually/semantically similar to its neighbors, a locality-preserved constraint is encoded into the NMF algorithm. Mathematically, the saliency of each graphlet is quantified by the norm of its sparse codes. Afterward, we sequentially link them into a path to simulate human gaze allocation. Finally, a probabilistic quality model is learned based on such paths extracted from a collection of photos/videos, which are marked as high quality ones via multiple Flickr users. Comprehensive experiments have demonstrated that: 1) our quality model outperforms many of its competitors significantly, and 2) the learned paths are on average 89.5% consistent with real human gaze shifting paths.", "title": "" }, { "docid": "013f9499b9a3e1ffdd03aa4de48d233b", "text": "We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a \"sanitization\" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. 
The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a \"synthetic data set\" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role.\n For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.", "title": "" }, { "docid": "ec4dcce4f53e38909be438beeb62b1df", "text": " A very efficient protocol for plant regeneration from two commercial Humulus lupulus L. (hop) cultivars, Brewers Gold and Nugget has been established, and the morphogenetic potential of explants cultured on Adams modified medium supplemented with several concentrations of cytokinins and auxins studied. Zeatin at 4.56 μm produced direct caulogenesis and caulogenic calli in both cultivars. Subculture of these calli on Adams modified medium supplemented with benzylaminopurine (4.4 μm) and indolebutyric acid (0.49 μm) promoted shoot regeneration which gradually increased up to the third subculture. Regeneration rates of 60 and 29% were achieved for Nugget and Brewers Gold, respectively. By selection of callus lines, it has been possible to maintain caulogenic potential for 14 months. Regenerated plants were successfully transferred to field conditions.", "title": "" }, { "docid": "9c3172266da959ee3cf9e7316bbcba96", "text": "We propose a new research direction for eye-typing which is potentially much faster: dwell-free eye-typing. Dwell-free eye-typing is in principle possible because we can exploit the high redundancy of natural languages to allow users to simply look at or near their desired letters without stopping to dwell on each letter. As a first step we created a system that simulated a perfect recognizer for dwell-free eye-typing. We used this system to investigate how fast users can potentially write using a dwell-free eye-typing interface. We found that after 40 minutes of practice, users reached a mean entry rate of 46 wpm. This indicates that dwell-free eye-typing may be more than twice as fast as the current state-of-the-art methods for writing by gaze. A human performance model further demonstrates that it is highly unlikely traditional eye-typing systems will ever surpass our dwell-free eye-typing performance estimate.", "title": "" }, { "docid": "681aba7f37ae6807824c299454af5721", "text": "Due to their rapid growth and deployment, Internet of things (IoT) devices have become a central aspect of our daily lives. However, they tend to have many vulnerabilities which can be exploited by an attacker. Unsupervised techniques, such as anomaly detection, can help us secure the IoT devices. However, an anomaly detection model must be trained for a long time in order to capture all benign behaviors. This approach is vulnerable to adversarial attacks since all observations are assumed to be benign while training the anomaly detection model. 
In this paper, we propose CIoTA, a lightweight framework that utilizes the blockchain concept to perform distributed and collaborative anomaly detection for devices with limited resources. CIoTA uses blockchain to incrementally update a trusted anomaly detection model via self-attestation and consensus among IoT devices. We evaluate CIoTA on our own distributed IoT simulation platform, which consists of 48 Raspberry Pis, to demonstrate CIoTA’s ability to enhance the security of each device and the security of the network as a whole.", "title": "" }, { "docid": "7c482427e4f0305c32210093e803eb78", "text": "A healable transparent capacitive touch screen sensor has been fabricated based on a healable silver nanowire-polymer composite electrode. The composite electrode features a layer of silver nanowire percolation network embedded into the surface layer of a polymer substrate comprising an ultrathin soldering polymer layer to confine the nanowires to the surface of a healable Diels-Alder cycloaddition copolymer and to attain low contact resistance between the nanowires. The composite electrode has a figure-of-merit sheet resistance of 18 Ω/sq with 80% transmittance at 550 nm. A surface crack cut on the conductive surface with 18 Ω is healed by heating at 100 °C, and the sheet resistance recovers to 21 Ω in 6 min. A healable touch screen sensor with an array of 8×8 capacitive sensing points is prepared by stacking two composite films patterned with 8 rows and 8 columns of coupling electrodes at 90° angle. After deliberate damage, the coupling electrodes recover touch sensing function upon heating at 80 °C for 30 s. A capacitive touch screen based on Arduino is demonstrated capable of performing quick recovery from malfunction caused by a razor blade cutting. After four cycles of cutting and healing, the sensor array remains functional.", "title": "" }, { "docid": "d8127fc372994baee6fd8632d585a347", "text": "Dynamic query interfaces (DQIs) form a recently developed method of database access that provides continuous realtime feedback to the user during the query formulation process. Previous work shows that DQIs are elegant and powerful interfaces to small databases. Unfortunately, when applied to large databases, previous DQI algorithms slow to a crawl. We present a new approach to DQI algorithms that works well with large databases.", "title": "" } ]
scidocsrr
36fae3951ccc4f729d75eeba37981676
Global Policy Construction in Modular Reinforcement Learning
[ { "docid": "c0d7b92c1b88a2c234eac67c5677dc4d", "text": "To appear in G Tesauro D S Touretzky and T K Leen eds Advances in Neural Information Processing Systems MIT Press Cambridge MA A straightforward approach to the curse of dimensionality in re inforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neu ral net Although this has been successful in the domain of backgam mon there is no guarantee of convergence In this paper we show that the combination of dynamic programming and function approx imation is not robust and in even very benign cases may produce an entirely wrong policy We then introduce Grow Support a new algorithm which is safe from divergence yet can still reap the bene ts of successful generalization", "title": "" } ]
[ { "docid": "b3c203dabe2c19764634fbc3a6717381", "text": "This work complements existing research regarding the forgiveness process by highlighting the role of commitment in motivating forgiveness. On the basis of an interdependence-theoretic analysis, the authors suggest that (a) victims' self-oriented reactions to betrayal are antithetical to forgiveness, favoring impulses such as grudge and vengeance, and (b) forgiveness rests on prorelationship motivation, one cause of which is strong commitment. A priming experiment, a cross-sectional survey study, and an interaction record study revealed evidence of associations (or causal effects) of commitment with forgiveness. The commitment-forgiveness association appeared to rest on intent to persist rather than long-term orientation or psychological attachment. In addition, the commitment-forgiveness association was mediated by cognitive interpretations of betrayal incidents; evidence for mediation by emotional reactions was inconsistent.", "title": "" }, { "docid": "886e88c878bae3c56fc81e392cecd1c9", "text": "This review summarizes data from the numerous investigations from the beginning of the last century to the present. The studies concerned the main issues of the morphology, the life cycle, hosts and localization of Hepatozoon canis (phylum Apicomplexa, suborder Adeleorina, family Hepatozoidae). The characteristic features of hepatozoonosis, caused by Hepatozoon canis in the dog, are evaluated. A survey of clinical signs, gross pathological changes, epidemiology, diagnosis and treatment of the disease was made. The measures for prevention of Hepatozoon canis infection in animals are listed. The importance of hepatozoonosis with regard to public health was evaluated. The studies on the subject, performed in Bulgaria, are discussed.", "title": "" }, { "docid": "1bfc1972a32222a1b5816bb040040374", "text": "BACKGROUND\nSkeletal muscle is key to motor development and represents a major metabolic end organ that aids glycaemic regulation.\n\n\nOBJECTIVES\nTo create gender-specific reference curves for fat-free mass (FFM) and appendicular (limb) skeletal muscle mass (SMMa) in children and adolescents. To examine the muscle-to-fat ratio in relation to body mass index (BMI) for age and gender.\n\n\nMETHODS\nBody composition was measured by segmental bioelectrical impedance (BIA, Tanita BC418) in 1985 Caucasian children aged 5-18.8 years. Skeletal muscle mass data from the four limbs were used to derive smoothed centile curves and the muscle-to-fat ratio.\n\n\nRESULTS\nThe centile curves illustrate the developmental patterns of %FFM and SMMa. While the %FFM curves differ markedly between boys and girls, the SMMa (kg), %SMMa and %SMMa/FFM show some similarities in shape and variance, together with some gender-specific characteristics. Existing BMI curves do not reveal these gender differences. Muscle-to-fat ratio showed a very wide range with means differing between boys and girls and across fifths of BMI z-score.\n\n\nCONCLUSIONS\nBIA assessment of %FFM and SMMa represents a significant advance in nutritional assessment since these body composition components are associated with metabolic health. 
Muscle-to-fat ratio has the potential to provide a better index of future metabolic health.", "title": "" }, { "docid": "df833f98f7309a5ab5f79fae2f669460", "text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.", "title": "" }, { "docid": "ddff0a3c6ed2dc036cf5d6b93d2da481", "text": "Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges on this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predominantly generate temporal event proposals in the forward direction, which neglects future video context. We propose a bidirectional proposal method that effectively exploits both past and future contexts to make proposal predictions. Second, different events ending at (nearly) the same time are indistinguishable in the previous works, resulting in the same captions. We solve this problem by representing each event with an attentive fusion of hidden states from the proposal module and video contents (e.g., C3D features). We further propose a novel context gating mechanism to balance the contributions from the current event and its surrounding contexts dynamically. We empirically show that our attentively fused event representation is superior to the proposal hidden states or video contents alone. By coupling proposal and captioning modules into one unified framework, our model outperforms the state-of-the-arts on the ActivityNet Captions dataset with a relative gain of over 100% (Meteor score increases from 4.82 to 9.65).", "title": "" }, { "docid": "1818105ee444c2837dfdf91884af3109", "text": "Most RE research is conceptual and concentrates on methods or techniques, primarily supporting a single activity. 
Moreover, the rare field studies we actually have do not establish a link between RE practices and performance. We therefore conducted this study to identify the RE practices that clearly contribute to software project success. Stakeholders and teams Stakeholders are individuals and organizations that are actively involved in a software project or whose interests the project affects. Stakeholders of any computer system can include customers, users, project managers, analysts , developers, senior management, and quality assurance staff. Table 1 illustrates the wide range of expertise and motivations that stakeholders typically exhibit. 2 A typical software project team consists of a project manager, analysts, developers, and quality assurance personnel. Often it includes users or their representatives. In the case of commercial off-the-shelf (COTS) software, marketers such as sales representatives and account managers tend to substitute for users and customers. Field study Seven field studies have reported on RE in practice. 3–9 Unfortunately, these rare studies have not established a clear link to performance and tend to focus on a narrow set of variables. Our study provides a more integrated view of RE by investigating team knowledge, allocated resources, and deployed RE processes (see Figure 1) and their contribution to project success. In addition, we incorporate the observations of previous field studies. Fifteen RE teams, including six COTS and nine customized application develop-Based on their field study of 15 requirements engineering teams, the authors identify the RE practices that clearly contribute to project success, particularly in terms of team knowledge, resource allocation, and process. D eficient requirements are the single biggest cause of software project failure. From studying several hundred organizations, Capers Jones discovered that RE is deficient in more than 75 percent of all enterprises. 1 In other words, getting requirements right might be the single most important and difficult part of a software project. Despite its importance, we know surprisingly little about the actual process of specifying software. \" The RE Process \" sidebar provides a basic description.", "title": "" }, { "docid": "5236f684bc0fdf11855a439c9d3256f6", "text": "The smart home is an environment, where heterogeneous electronic devices and appliances are networked together to provide smart services in a ubiquitous manner to the individuals. As the homes become smarter, more complex, and technology dependent, the need for an adequate security mechanism with minimum individual’s intervention is growing. The recent serious security attacks have shown how the Internet-enabled smart homes can be turned into very dangerous spots for various ill intentions, and thus lead the privacy concerns for the individuals. For instance, an eavesdropper is able to derive the identity of a particular device/appliance via public channels that can be used to infer in the life pattern of an individual within the home area network. This paper proposes an anonymous secure framework (ASF) in connected smart home environments, using solely lightweight operations. The proposed framework in this paper provides efficient authentication and key agreement, and enables devices (identity and data) anonymity and unlinkability. One-time session key progression regularly renews the session key for the smart devices and dilutes the risk of using a compromised session key in the ASF. 
It is demonstrated that computation complexity of the proposed framework is low as compared with the existing schemes, while security has been significantly improved.", "title": "" }, { "docid": "4f0b32fb335a0a19f431ddc1b7785c05", "text": "Dental implants have proven to be a successful treatment option in fully and partially edentulous patients, rendering long-term functional and esthetic outcomes. Various factors are crucial for predictable long-term peri-implant tissue stability, including the biologic width; the papilla height and the mucosal soft-tissue level; the amounts of soft-tissue volume and keratinized tissue; and the biotype of the mucosa. The biotype of the mucosa is congenitally set, whereas many other parameters can, to some extent, be influenced by the treatment itself. Clinically, the choice of the dental implant and the position in a vertical and horizontal direction can substantially influence the establishment of the biologic width and subsequently the location of the buccal mucosa and the papilla height. Current treatment concepts predominantly focus on providing optimized peri-implant soft-tissue conditions before the start of the prosthetic phase and insertion of the final reconstruction. These include refined surgical techniques and the use of materials from autogenous and xenogenic origins to augment soft-tissue volume and keratinized tissue around dental implants, thereby mimicking the appearance of natural teeth.", "title": "" }, { "docid": "b77bef86667caed885fee95c79dc2292", "text": "In this work, we propose a novel method for vocabulary selection to automatically adapt automatic speech recognition systems to the diverse topics that occur in educational and scientific lectures. Utilizing materials that are available before the lecture begins, such as lecture slides, our proposed framework iteratively searches for related documents on the web and generates a lecture-specific vocabulary based on the resulting documents. In this paper, we propose a novel method for vocabulary selection where we first collect documents similar to an initial seed document and then rank the resulting vocabulary based on a score which is calculated using a combination of word features. This is a critical component for adaptation that has typically been overlooked in prior works. On the inter ACT German-English simultaneous lecture translation system our proposed approach significantly improved vocabulary coverage, reducing the out-of-vocabulary rate, on average by 57.0% and up to 84.9%, compared to a lecture-independent baseline. Furthermore, our approach reduced the word error rate, by 12.5% on average and up to 25.3%, compared to a lecture-independent baseline.", "title": "" }, { "docid": "a0aa33c4afa58bd4dff7eb209bfb7924", "text": "OBJECTIVE\nTo assess whether frequent marijuana use is associated with residual neuropsychological effects.\n\n\nDESIGN\nSingle-blind comparison of regular users vs infrequent users of marijuana.\n\n\nPARTICIPANTS\nTwo samples of college undergraduates: 65 heavy users, who had smoked marijuana a median of 29 days in the last 30 days (range, 22 to 30 days) and who also displayed cannabinoids in their urine, and 64 light users, who had smoked a median of 1 day in the last 30 days (range, 0 to 9 days) and who displayed no urinary cannabinoids.\n\n\nINTERVENTION\nSubjects arrived at 2 PM on day 1 of their study visit, then remained at our center overnight under supervision. Neuropsychological tests were administered to all subjects starting at 9 AM on day 2. 
Thus, all subjects were abstinent from marijuana and other drugs for a minimum of 19 hours before testing.\n\n\nMAIN OUTCOME MEASURES\nSubjects received a battery of standard neuropsychological tests to assess general intellectual functioning, abstraction ability, sustained attention, verbal fluency, and ability to learn and recall new verbal and visuospatial information.\n\n\nRESULTS\nHeavy users displayed significantly greater impairment than light users on attention/executive functions, as evidenced particularly by greater perseverations on card sorting and reduced learning of word lists. These differences remained after controlling for potential confounding variables, such as estimated levels of premorbid cognitive functioning, and for use of alcohol and other substances in the two groups.\n\n\nCONCLUSIONS\nHeavy marijuana use is associated with residual neuropsychological effects even after a day of supervised abstinence from the drug. However, the question remains open as to whether this impairment is due to a residue of drug in the brain, a withdrawal effect from the drug, or a frank neurotoxic effect of the drug. from marijuana", "title": "" }, { "docid": "a0dc5016dfd424846177e8bb563395d3", "text": "BACKGROUND\nGiven that the prevalence of antenatal and postnatal depression is high, with estimates around 13%, and the consequences serious, efforts have been made to identify risk factors to assist in prevention, identification and treatment. Most risk factors associated with postnatal depression have been well researched, whereas predictors of antenatal depression have been less researched. Risk factors associated with early parenting stress have not been widely researched, despite the strong link with depression. The aim of this study was to further elucidate which of some previously identified risk factors are most predictive of three outcome measures: antenatal depression, postnatal depression and parenting stress and to examine the relationship between them.\n\n\nMETHODS\nPrimipara and multiparae women were recruited antenatally from two major hoitals as part of the beyondblue National Postnatal Depression Program 1. In this subsidiary study, 367 women completed an additional large battery of validated questionnaires to identify risk factors in the antenatal period at 26-32 weeks gestation. A subsample of these women (N = 161) also completed questionnaires at 10-12 weeks postnatally. Depression level was measured by the Beck Depression Inventory (BDI).\n\n\nRESULTS\nRegression analyses identified significant risk factors for the three outcome measures. (1). Significant predictors for antenatal depression: low self-esteem, antenatal anxiety, low social support, negative cognitive style, major life events, low income and history of abuse. (2). Significant predictors for postnatal depression: antenatal depression and a history of depression while also controlling for concurrent parenting stress, which was a significant variable. Antenatal depression was identified as a mediator between seven of the risk factors and postnatal depression. (3). Postnatal depression was the only significant predictor for parenting stress and also acted as a mediator for other risk factors.\n\n\nCONCLUSION\nRisk factor profiles for antenatal depression, postnatal depression and parenting stress differ but are interrelated. Antenatal depression was the strongest predictor of postnatal depression, and in turn postnatal depression was the strongest predictor for parenting stress. 
These results provide clinical direction suggesting that early identification and treatment of perinatal depression is important.", "title": "" }, { "docid": "bdc9bc09af90bd85f64c79cbca766b61", "text": "The inhalation route is frequently used to administer drugs for the management of respiratory diseases such as asthma or chronic obstructive pulmonary disease. Compared with other routes of administration, inhalation offers a number of advantages in the treatment of these diseases. For example, via inhalation, a drug is directly delivered to the target organ, conferring high pulmonary drug concentrations and low systemic drug concentrations. Therefore, drug inhalation is typically associated with high pulmonary efficacy and minimal systemic side effects. The lung, as a target, represents an organ with a complex structure and multiple pulmonary-specific pharmacokinetic processes, including (1) drug particle/droplet deposition; (2) pulmonary drug dissolution; (3) mucociliary and macrophage clearance; (4) absorption to lung tissue; (5) pulmonary tissue retention and tissue metabolism; and (6) absorptive drug clearance to the systemic perfusion. In this review, we describe these pharmacokinetic processes and explain how they may be influenced by drug-, formulation- and device-, and patient-related factors. Furthermore, we highlight the complex interplay between these processes and describe, using the examples of inhaled albuterol, fluticasone propionate, budesonide, and olodaterol, how various sequential or parallel pulmonary processes should be considered in order to comprehend the pulmonary fate of inhaled drugs.", "title": "" }, { "docid": "d4303828b62c4a03ca69a071d909b0a8", "text": "Despite the increased salience of metaphor in organization theory, current perspectives are flawed and misguided in assuming that metaphor can be explained with the so-called comparison model. I therefore outline an alternative model of metaphor understanding—the domains-interaction model—which suggests that metaphor involves the conjunction of whole semantic domains in which a correspondence between terms or concepts is constructed rather than deciphered and where the resulting image and meaning is creative. I also discuss implications of this model for organizational theorizing and research.", "title": "" }, { "docid": "146402a4b52f16b583e224cbf9a84119", "text": "Many different methods to train deep generative models have been introduced in the past. In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call \"Variational Mixture of Posteriors\" prior, or VampPrior for short. The VampPrior consists of a mixture distribution (e.g., a mixture of Gaussians) with components given by variational posteriors conditioned on learnable pseudo-inputs. We further extend this prior to a two layer hierarchical model and show that this architecture with a coupled prior and posterior, learns significantly better models. The model also avoids the usual local optima issues related to useless latent dimensions that plague VAEs. 
We provide empirical studies on six datasets, namely, static and binary MNIST, OMNIGLOT, Caltech 101 Silhouettes, Frey Faces and Histopathology patches, and show that applying the hierarchical VampPrior delivers state-of-the-art results on all datasets in the unsupervised permutation invariant setting and the best results or comparable to SOTA methods for the approach with convolutional networks.", "title": "" }, { "docid": "5f28ea8333b883d9a485d908ef7496b0", "text": "Fueled by the increasing popularity of online social networks, social influence analysis has attracted a great deal of research attention in the past decade. The diffusion process is often modeled using influence graphs, and there has been a line of research that involves algorithmic problems in influence graphs. However, the vast size of today's real-world networks raises a serious issue with regard to computational efficiency.\n In this paper, we propose a new algorithm for reducing influence graphs. Given an input influence graph, the proposed algorithm produces a vertex-weighted influence graph, which is compact and approximates the diffusion properties of the input graph. The central strategy of influence graph reduction is coarsening, which has the potential to greatly reduce the number of edges by merging a vertex set into a single weighted vertex. We provide two implementations; a speed-oriented implementation which runs in linear time with linear space and a scalability-oriented implementation which runs in practically linear time with sublinear space. Further, we present general frameworks using our compact graphs that accelerate existing algorithms for influence maximization and influence estimation problems, which are motivated by practical applications, such as viral marketing. Using these frameworks, we can quickly obtain solutions that have accuracy guarantees under a reasonable assumption. Experiments with real-world networks demonstrate that the proposed algorithm can scale to billion-edge graphs and reduce the graph size to up to 4%. In addition, our influence maximization framework achieves four times speed-up of a state-of-the-art D-SSA algorithm, and our influence estimation framework cuts down the computation time of a simulation-based method to 3.5%.", "title": "" }, { "docid": "00f333b1875e28d6158b793a75fc13a3", "text": "Over the last 20 years, cultural heritage has been a favored domain for personalization research. For years, researchers have experimented with the cutting edge technology of the day; now, with the convergence of internet and wireless technology, and the increasing adoption of the Web as a platform for the publication of information, the visitor is able to exploit cultural heritage material before, during and after the visit, having different goals and requirements in each phase. However, cultural heritage sites have a huge amount of information to present, which must be filtered and personalized in order to enable the individual user to easily access it. Personalization of cultural heritage information requires a system that is able to model the user (e.g., interest, knowledge and other personal characteristics), as well as contextual aspects, select the most appropriate content, and deliver it in the most suitable way. It should be noted that achieving this result is extremely challenging in the case of first-time users, such as tourists who visit a cultural heritage site for the first time (and maybe the only time in their life). 
In addition, as tourism is a social activity, adapting to the individual is not enough because groups and communities have to be modeled and supported as well, taking into account their mutual interests, previous mutual experience, and requirements. How to model and represent the user(s) and the context of the visit and how to reason with regard to the information that is available are the challenges faced by researchers in personalization of cultural heritage. Notwithstanding the effort invested so far, a definite solution is far from being reached, mainly because new technology and new aspects of personalization are constantly being introduced. This article surveys the research in this area. Starting from the earlier systems, which presented cultural heritage information in kiosks, it summarizes the evolution of personalization techniques in museum web sites, virtual collections and mobile guides, until recent extension of cultural heritage toward the semantic and social web. The paper concludes with current challenges and points out areas where future research is needed.", "title": "" }, { "docid": "e78c9ec9fa263e193b589ec2791ce870", "text": "Firewall is the de facto core technology of today's network security and defense. However, the management of firewall rules has been proven to be complex, error-prone, costly and inefficient for many large-networked organizations. These firewall rules are mostly custom-designed and hand-written thus in constant need for tuning and validation, due to the dynamic nature of the traffic characteristics, ever-changing network environment and its market demands. One of the main problems that we address in this paper is that how much the firewall rules are useful, up-to-dated, well-organized or efficient to reflect the current characteristics of network traffics. In this paper, we present a set of techniques and algorithms to analysis and manage firewall policy rules: (1) data mining technique to deduce efficient firewall policy rules by mining its network traffic log based on its frequency, (2) filtering-rule generalization (FRG) to reduce the number of policy rules by generalization, and (3) a technique to identify any decaying rule and a set of few dominant rules, to generate a new set of efficient firewall policy rules. The anomaly detection based on the mining exposes many hidden but not detectable by analyzing only the firewall policy rules, resulting in two new types of the anomalies. As a result of these mechanisms, network security administrators can automatically review and update the rules. We have developed a prototype system and demonstrated usefulness of our approaches", "title": "" }, { "docid": "f68fda6e081e53302dd3af1a436fec40", "text": "BACKGROUND\nHelicobacter pylori is one of the most prevalent global pathogens and can lead to gastrointestinal disease including peptic ulcers, gastric marginal zone lymphoma and gastric carcinoma.\n\n\nAIM\nTo review recent trends in H. pylori antibiotic resistance rates, and to discuss diagnostics and treatment paradigms.\n\n\nMETHODS\nA PubMed literature search using the following keywords: Helicobacter pylori, antibiotic resistance, clarithromycin, levofloxacin, metronidazole, prevalence, susceptibility testing.\n\n\nRESULTS\nThe prevalence of bacterial antibiotic resistance is regionally variable and appears to be markedly increasing with time in many countries. Concordantly, the antimicrobial eradication rate of H. pylori has been declining globally. 
In particular, clarithromycin resistance has been rapidly increasing in many countries over the past decade, with rates as high as approximately 30% in Japan and Italy, 50% in China and 40% in Turkey; whereas resistance rates are much lower in Sweden and Taiwan, at approximately 15%; there are limited data in the USA. Other antibiotics show similar trends, although less pronounced.\n\n\nCONCLUSIONS\nSince the choice of empiric therapies should be predicated on accurate information regarding antibiotic resistance rates, there is a critical need for determination of current rates at a local scale, and perhaps in individual patients. Such information would not only guide selection of appropriate empiric antibiotic therapy but also inform the development of better methods to identify H. pylori antibiotic resistance at diagnosis. Patient-specific tailoring of effective antibiotic treatment strategies may lead to reduced treatment failures and less antibiotic resistance.", "title": "" }, { "docid": "ade3f3c778cf29e7c03bf96196916d6d", "text": "Selection and use of pattern recognition algorithms is application dependent. In this work, we explored the use of several ensembles of weak classifiers to classify signals captured from a wearable sensor system to detect food intake based on chewing. Three sensor signals (Piezoelectric sensor, accelerometer, and hand to mouth gesture) were collected from 12 subjects in free-living conditions for 24 hrs. Sensor signals were divided into 10 seconds epochs and for each epoch combination of time and frequency domain features were computed. In this work, we present a comparison of three different ensemble techniques: boosting (AdaBoost), bootstrap aggregation (bagging) and stacking, each trained with 3 different weak classifiers (Decision Trees, Linear Discriminant Analysis (LDA) and Logistic Regression). Type of feature normalization used can also impact the classification results. For each ensemble method, three feature normalization techniques: (no-normalization, z-score normalization, and minmax normalization) were tested. A 12 fold cross-validation scheme was used to evaluate the performance of each model where the performance was evaluated in terms of precision, recall, and accuracy. Best results achieved here show an improvement of about 4% over our previous algorithms.", "title": "" }, { "docid": "e99d7b425ab1a2a9a2de4e10a3fbe766", "text": "In this paper, a review of the authors' work on inkjet-printed flexible antennas, fabricated on paper substrates, is given. This is presented as a system-level solution for ultra-low-cost mass production of UHF radio-frequency identification (RFID) tags and wireless sensor nodes (WSN), in an approach that could be easily extended to other microwave and wireless applications. First, we discuss the benefits of using paper as a substrate for high-frequency applications, reporting its very good electrical/dielectric performance up to at least 1 GHz. The RF characteristics of the paper-based substrate are studied by using a microstrip-ring resonator, in order to characterize the dielectric properties (dielectric constant and loss tangent). We then give details about the inkjet-printing technology, including the characterization of the conductive ink, which consists of nano-silver particles. We highlight the importance of this technology as a fast and simple fabrication technique, especially on flexible organic (e.g., LCP) or paper-based substrates. 
A compact inkjet-printed UHF “passive RFID” antenna, using the classic T-match approach and designed to match the IC's complex impedance, is presented as a demonstration prototype for this technology. In addition, we briefly touch upon the state-of-the-art area of fully-integrated wireless sensor modules on paper. We show the first-ever two-dimensional sensor integration with an RFID tag module on paper, as well as the possibility of a three-dimensional multilayer paper-based RF/microwave structure.", "title": "" } ]
scidocsrr