query_id (string, 32 chars) | query (string, 5 to 5.38k chars) | positive_passages (list, 1 to 23 passages) | negative_passages (list, 4 to 100 passages) | subset (string, 7 distinct values)
---|---|---|---|---|
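Each entry in `positive_passages` and `negative_passages` is an object with `docid`, `text`, and `title` fields. Below is a minimal sketch of iterating over rows with this schema; it assumes the rows are available locally as a JSON Lines file (the file name is hypothetical, since this dump does not name the underlying dataset or its loading mechanism). The example rows that follow show the same structure directly.

```python
import json

# Hypothetical local file; the dump does not specify how the dataset is distributed.
ROWS_PATH = "retrieval_rows.jsonl"

def iter_rows(path):
    """Yield one row dict per line of a JSON Lines file matching the schema above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

for row in iter_rows(ROWS_PATH):
    # A row pairs one query with its relevant (positive) and irrelevant (negative) passages.
    print(row["query_id"], row["subset"])
    print("query:", row["query"][:80])
    print("positives:", len(row["positive_passages"]),
          "negatives:", len(row["negative_passages"]))
    # Every passage is a dict with "docid", "text", and "title" keys.
    first = row["positive_passages"][0]
    print("first positive:", first["docid"], first["text"][:80])
    break  # inspect only the first row
```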
ae84b6d280fa157764e03fd522c4d2d1
|
Understanding adolescence as a period of social–affective engagement and goal flexibility
|
[
{
"docid": "b0b1139c48bbe2286096a7e795d4d0cb",
"text": "This chapter identifies the most robust conclusions and ideas about adolescent development and psychological functioning that have emerged since Petersen's 1988 review. We begin with a discussion of topics that have dominated recent research, including adolescent problem behavior, parent-adolescent relations, puberty, the development of the self, and peer relations. We then identify and examine what seem to us to be the most important new directions that have come to the fore in the last decade, including research on diverse populations, contextual influences on development, behavioral genetics, and siblings. We conclude with a series of recommendations for future research on adolescence.",
"title": ""
}
] |
[
{
"docid": "fcda27822551a75990ac638e8920ab4d",
"text": "Images and videos becomes one of the principle means of communication these days. Validating the authenticity of the image has been the active research area for last decade. When an image or video is obtained as the evidence it can be used as probative only if it is authentic. Convolution Neural Networks (CNN) have been widely used in automatic image classification, Image Recognition and Identifying image Manipulation. CNN is efficient deep neural network that can study concurrently with the help of large datasets. Recent studies have indicated that the architectures of CNN tailored for identifying manipulated image will provide least efficiency when the image is directly fed into the network. Deep Learning is the branch of machine learning that learns the features by hierarchical representation where higher-level features are defined from lower-level concepts. In this paper, we make use of deep learning known as CNN to classify the manipulated image which is capable of automatically learning traces left by editing of the image by applying the filter that retrieves altered relationship among the pixels of the image and experiments were done in TensorFlow framework. Results showed that manipulations like median filtering, Gaussian blurring, resizing and cut and paste forgery can be detected with an average accuracy of 97%.",
"title": ""
},
{
"docid": "1f6b1757282fda5bae06cd0617054642",
"text": "A crucial step toward the goal of automatic extraction of propositionalinformationfrom naturallanguagetext is the identificationof semanticrelations betweenconstituentsin sentences.We examinethe problemof distinguishing amongsevenrelationtypesthatcanoccurbetweentheentities“treatment”and “disease” in biosciencetext, and the problemof identifyingsuchentities.We comparefive generati ve graphicalmodels anda neuralnetwork, usinglexical, syntactic,andsemanticfeatures,finding that thelatterhelpachieve high classificationaccuracy.",
"title": ""
},
{
"docid": "252b8722acd43c9f61a6b10019715392",
"text": "Semantic segmentation is an important step of visual scene understanding for autonomous driving. Recently, Convolutional Neural Network (CNN) based methods have successfully applied in semantic segmentation using narrow-angle or even wide-angle pinhole camera. However, in urban traffic environments, autonomous vehicles need wider field of view to perceive surrounding things and stuff, especially at intersections. This paper describes a CNN-based semantic segmentation solution using fisheye camera which covers a large field of view. To handle the complex scene in the fisheye image, Overlapping Pyramid Pooling (OPP) module is proposed to explore local, global and pyramid local region context information. Based on the OPP module, a network structure called OPP-net is proposed for semantic segmentation. The net is trained and evaluated on a fisheye image dataset for semantic segmentation which is generated from an existing dataset of urban traffic scenes. In addition, zoom augmentation, a novel data augmentation policy specially designed for fisheye image, is proposed to improve the net's generalization performance. Experiments demonstrate the outstanding performance of the OPP-net for urban traffic scenes and the effectiveness of the zoom augmentation.",
"title": ""
},
{
"docid": "1d9031943ab33eb7e715e41d4e953be8",
"text": "The Controller Area Network (CAN) in cars is critical to their safety and performance and is now regarded as being vulnerable to cyberattack. Recent studies have looked at securing the CAN and at intrusion detection methods so that attacks can be quickly identified. The CAN has qualities that distinguish it from other computer networks, while the nature of car production and usage also provide challenges. Thus attack detection methods employed for other networks lack appropriateness for the CAN. This paper surveys the methods that have been investigated for CAN intrusion detection, and considers their implications in terms of practicability and requirements. Consequent developments that will be needed for implementation and research are suggested.",
"title": ""
},
{
"docid": "655194a8e9398d11af167f2fb616a0ad",
"text": "Twitter sentiment analysis or the task of automatically retrieving opinions from tweets has received an increasing interest from the web mining community. This is due to its importance in a wide range of fields such as business and politics. People express sentiments about specific topics or entities with different strengths and intensities, where these sentiments are strongly related to their personal feelings and emotions. A number of methods and lexical resources have been proposed to analyze sentiment from natural language texts, addressing different opinion dimensions. In this article, we propose an approach for boosting Twitter sentiment classification using different sentiment dimensions as meta-level features. We combine aspects such as opinion strength, emotion and polarity indicators, generated by existing sentiment analysis methods and resources. Our research shows that the combination of sentiment dimensions provides significant improvement in Twitter sentiment classification tasks such as polarity and subjectivity.",
"title": ""
},
{
"docid": "314cdc9c802e23b7fde95fa29b4debcb",
"text": "Authorship attribution is a growing field, moving from beginnings in linguistics to recent advances in text mining. Through this change came an increase in the capability of authorship attribution methods both in their accuracy and the ability to consider more difficult problems. Research into authorship attribution in the 19th century considered it difficult to determine the authorship of a document of fewer than 1000 words. By the 1990s this values had decreased to less than 500 words and in the early 21st century it was considered possible to determine the authorship of a document in 250 words. The need for this ever decreasing limit is exemplified by the trend towards many shorter communications rather than fewer longer communications, such as the move from traditional multi-page handwritten letters to shorter, more focused emails. This trend has also been shown in online crime, where many attacks such as phishing or bullying are performed using very concise language. Cybercrime messages have long been hosted on Internet Relay Chats (IRCs) which have allowed members to hide behind screen names and connect anonymously. More recently, Twitter and other short message based web services have been used as a hosting ground for online crimes. This paper presents some evaluations of current techniques and identifies some new preprocessing methods that can be used to enable authorship to be determined at rates significantly better than chance for documents of 140 characters or less, a format popularised by the micro-blogging website Twitter1. We show that the SCAP methodology performs extremely well on twitter messages and even with restrictions on the types of information allowed, such as the recipient of directed messages, still perform significantly higher than chance. Further to this, we show that 120 tweets per user is an important threshold, at which point adding more tweets per user gives a small but non-significant increase in accuracy.",
"title": ""
},
{
"docid": "78db8b57c3221378847092e5283ad754",
"text": "This paper analyzes correlations and causalities between Bitcoin market indicators and Twitter posts containing emotional signals on Bitcoin. Within a timeframe of 104 days (November 23 2013 March 7 2014), about 160,000 Twitter posts containing ”bitcoin” and a positive, negative or uncertainty related term were collected and further analyzed. For instance, the terms ”happy”, ”love”, ”fun”, ”good”, ”bad”, ”sad” and ”unhappy” represent positive and negative emotional signals, while ”hope”, ”fear” and ”worry” are considered as indicators of uncertainty. The static (daily) Pearson correlation results show a significant positive correlation between emotional tweets and the close price, trading volume and intraday price spread of Bitcoin. However, a dynamic Granger causality analysis does not confirm a causal effect of emotional Tweets on Bitcoin market values. To the contrary, the analyzed data shows that a higher Bitcoin trading volume Granger causes more signals of uncertainty within a 24 to 72hour timeframe. This result leads to the interpretation that emotional sentiments rather mirror the market than that they make it predictable. Finally, the conclusion of this paper is that the microblogging platform Twitter is Bitcoins virtual trading floor, emotionally reflecting its trading dynamics.2",
"title": ""
},
{
"docid": "df2c52d659bff75639783332b9bcd571",
"text": "The Alt-Right is a neo-fascist white supremacist movement that is involved in violent extremism and shows signs of engagement in extensive disinformation campaigns. Using social media data mining, this study develops a deeper understanding of such targeted disinformation campaigns and the ways they spread. It also adds to the available literature on the endogenous and exogenous influences within the US far right, as well as motivating factors that drive disinformation campaigns, such as geopolitical strategy. This study is to be taken as a preliminary analysis to indicate future methods and follow-on research that will help develop an integrated approach to understanding the strategies and associations of the modern fascist movement.",
"title": ""
},
{
"docid": "ab8599cbe4b906cea6afab663cbe2caf",
"text": "Real-time ETL and data warehouse multidimensional modeling (DMM) of business operational data has become an important research issue in the area of real-time data warehousing (RTDW). In this study, some of the recently proposed real-time ETL technologies from the perspectives of data volumes, frequency, latency, and mode have been discussed. In addition, we highlight several advantages of using semi-structured DMM (i.e. XML) in RTDW instead of traditional structured DMM (i.e., relational). We compare the two DMMs on the basis of four characteristics: heterogeneous data integration, types of measures supported, aggregate query processing, and incremental maintenance. We implemented the RTDW framework for an example telecommunication organization. Our experimental analysis shows that if the delay comes from the incremental maintenance of DMM, no ETL technology (full-reloading or incremental-loading) can help in real-time business intelligence.",
"title": ""
},
{
"docid": "013ca7d513b658f2dac68644a915b43a",
"text": "Money laundering a suspicious fund transfer between accounts without names which affects and threatens the stability of countries economy. The growth of internet technology and loosely coupled nature of fund transfer gateways helps the malicious user’s to perform money laundering. There are many approaches has been discussed earlier for the detection of money laundering and most of them suffers with identifying the root of money laundering. We propose a time variant approach using behavioral patterns to identify money laundering. In this approach, the transaction logs are split into various time window and for each account specific to the fund transfer the time value is split into different time windows and we generate the behavioral pattern of the user. The behavioral patterns specifies the method of transfer between accounts and the range of amounts and the frequency of destination accounts and etc.. Based on generated behavioral pattern , the malicious transfers and accounts are identified to detect the malicious root account. The proposed approach helps to identify more suspicious accounts and their group accounts to perform money laundering identification. The proposed approach has produced efficient results with less time complexity.",
"title": ""
},
{
"docid": "11d06fb5474df44a6bc733bd5cd1263d",
"text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.",
"title": ""
},
{
"docid": "2550502036aac5cf144cb8a0bc2d525b",
"text": "Significant achievements have been made on the development of next-generation filtration and separation membranes using graphene materials, as graphene-based membranes can afford numerous novel mass-transport properties that are not possible in state-of-art commercial membranes, making them promising in areas such as membrane separation, water desalination, proton conductors, energy storage and conversion, etc. The latest developments on understanding mass transport through graphene-based membranes, including perfect graphene lattice, nanoporous graphene and graphene oxide membranes are reviewed here in relation to their potential applications. A summary and outlook is further provided on the opportunities and challenges in this arising field. The aspects discussed may enable researchers to better understand the mass-transport mechanism and to optimize the synthesis of graphene-based membranes toward large-scale production for a wide range of applications.",
"title": ""
},
{
"docid": "ff5097d34b7c88d6772d18b5a87a71e9",
"text": "While abnormalities in head circumference in autism have been observed for decades, it is only recently that scientists have begun to focus in on the developmental origins of such a phenomenon. In this article we review past and present literature on abnormalities in head circumference, as well as recent developmental MRI studies of brain growth in this disorder. We hypothesize that brain growth abnormalities are greatest in frontal lobes, particularly affecting large neurons such as pyramidal cells, and speculate how this abnormality might affect neurofunctional circuitry in autism. The relationship to clinical characteristics and other disorders of macrencephaly are discussed.",
"title": ""
},
{
"docid": "d7a5eedd87637a266293595a6f2b924f",
"text": "Regular Expression (RE) matching has important applications in the areas of XML content distribution and network security. In this paper, we present the end-to-end design of a high performance RE matching system. Our system combines the processing efficiency of Deterministic Finite Automata (DFA) with the space efficiency of Non-deterministic Finite Automata (NFA) to scale to hundreds of REs. In experiments with real-life RE data on data streams, we found that a bulk of the DFA transitions are concentrated around a few DFA states. We exploit this fact to cache only the frequent core of each DFA in memory as opposed to the entire DFA (which may be exponential in size). Further, we cluster REs such that REs whose interactions cause an exponential increase in the number of states are assigned to separate groups -- this helps to improve cache hits by controlling the overall DFA size.\n To the best of our knowledge, ours is the first end-to-end system capable of matching REs at high speeds and in their full generality. Through a clever combination of RE grouping, and static and dynamic caching, it is able to perform RE matching at high speeds, even in the presence of limited memory. Through experiments with real-life data sets, we show that our RE matching system convincingly outperforms a state-of-the-art Network Intrusion Detection tool with support for efficient RE matching.",
"title": ""
},
{
"docid": "734840224154ef88cdb196671fd3f3f8",
"text": "Tiny face detection aims to find faces with high degrees of variability in scale, resolution and occlusion in cluttered scenes. Due to the very little information available on tiny faces, it is not sufficient to detect them merely based on the information presented inside the tiny bounding boxes or their context. In this paper, we propose to exploit the semantic similarity among all predicted targets in each image to boost current face detectors. To this end, we present a novel framework to model semantic similarity as pairwise constraints within the metric learning scheme, and then refine our predictions with the semantic similarity by utilizing the graph cut techniques. Experiments conducted on three widely-used benchmark datasets have demonstrated the improvement over the-state-of-the-arts gained by applying this idea.",
"title": ""
},
{
"docid": "d5a816dd44d4d95b0d281880f1917831",
"text": "In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.",
"title": ""
},
{
"docid": "1a90c5688663bcb368d61ba7e0d5033f",
"text": "Content-based audio classification and segmentation is a basis for further audio/video analysis. In this paper, we present our work on audio segmentation and classification which employs support vector machines (SVMs). Five audio classes are considered in this paper: silence, music, background sound, pure speech, and non- pure speech which includes speech over music and speech over noise. A sound stream is segmented by classifying each sub-segment into one of these five classes. We have evaluated the performance of SVM on different audio type-pairs classification with testing unit of different- length and compared the performance of SVM, K-Nearest Neighbor (KNN), and Gaussian Mixture Model (GMM). We also evaluated the effectiveness of some new proposed features. Experiments on a database composed of about 4- hour audio data show that the proposed classifier is very efficient on audio classification and segmentation. It also shows the accuracy of the SVM-based method is much better than the method based on KNN and GMM.",
"title": ""
},
{
"docid": "440b90f61bc7826c1165a1f3d306bd5e",
"text": "Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, compactness of representation, and search efficiency. Training of CNNs, either from scratch or fine-tuning, requires a large amount of annotated data, where a high quality of annotation is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automated manner. Reconstructed 3D models obtained by the state-of-the-art retrieval and structure-from-motion methods guide the selection of the training data. We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on the standard benchmarks: Oxford Buildings, Paris, and Holidays datasets.",
"title": ""
},
{
"docid": "0f699e9f14753b2cbfb7f7a3c7057f40",
"text": "There has been much recent work on training neural attention models at the sequencelevel using either reinforcement learning-style methods or by optimizing the beam. In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well by slightly outperforming beam search optimization in a like for like setup. We also report new state of the art results on both IWSLT’14 German-English translation as well as Gigaword abstractive summarization. On the large WMT’14 English-French task, sequence-level training achieves 41.5 BLEU which is on par with the state of the art.1",
"title": ""
}
] |
scidocsrr
|
e27cade05a319d440aa776b3d4f49652
|
Art and brain: insights from neuropsychology, biology and evolution.
|
[
{
"docid": "2f7990443281ed98189abb65a23b0838",
"text": "In recent years, there has been a tendency to correlate the origin of modern culture and language with that of anatomically modern humans. Here we discuss this correlation in the light of results provided by our first hand analysis of ancient and recently discovered relevant archaeological and paleontological material from Africa and Europe. We focus in particular on the evolutionary significance of lithic and bone technology, the emergence of symbolism, Neandertal behavioral patterns, the identification of early mortuary practices, the anatomical evidence for the acquisition of language, the",
"title": ""
},
{
"docid": "a52ac0402ca65a4e7a239c343f79df44",
"text": "How does the brain cause positive affective reactions to sensory pleasure? An answer to pleasure causation requires knowing not only which brain systems are activated by pleasant stimuli, but also which systems actually cause their positive affective properties. This paper focuses on brain causation of behavioral positive affective reactions to pleasant sensations, such as sweet tastes. Its goal is to understand how brain systems generate 'liking,' the core process that underlies sensory pleasure and causes positive affective reactions. Evidence suggests activity in a subcortical network involving portions of the nucleus accumbens shell, ventral pallidum, and brainstem causes 'liking' and positive affective reactions to sweet tastes. Lesions of ventral pallidum also impair normal sensory pleasure. Recent findings regarding this subcortical network's causation of core 'liking' reactions help clarify how the essence of a pleasure gloss gets added to mere sensation. The same subcortical 'liking' network, via connection to brain systems involved in explicit cognitive representations, may also in turn cause conscious experiences of sensory pleasure.",
"title": ""
}
] |
[
{
"docid": "35de54ee9d3d4c117cf4c1d8fc4f4e87",
"text": "On the purpose of managing process models to make them more practical and effective in enterprises, a construction of BPMN-based Business Process Model Base is proposed. Considering Business Process Modeling Notation (BPMN) is used as a standard of process modeling, based on BPMN, the process model transformation is given, and business blueprint modularization management methodology is used for process management. Therefore, BPMN-based Business Process Model Base provides a solution of business process modeling standardization, management and execution so as to enhance the business process reuse.",
"title": ""
},
{
"docid": "29df7892b16864cb3721a05886bbcc82",
"text": "With the rapid growth of the cyber attacks, sharing of cyber threat intelligence (CTI) becomes essential to identify and respond to cyber attack in timely and cost-effective manner. However, with the lack of standard languages and automated analytics of cyber threat information, analyzing complex and unstructured text of CTI reports is extremely time- and labor-consuming. Without addressing this challenge, CTI sharing will be highly impractical, and attack uncertainty and time-to-defend will continue to increase.\n Considering the high volume and speed of CTI sharing, our aim in this paper is to develop automated and context-aware analytics of cyber threat intelligence to accurately learn attack pattern (TTPs) from commonly available CTI sources in order to timely implement cyber defense actions. Our paper has three key contributions. First, it presents a novel threat-action ontology that is sufficiently rich to understand the specifications and context of malicious actions. Second, we developed a novel text mining approach that combines enhanced techniques of Natural Language Processing (NLP) and Information retrieval (IR) to extract threat actions based on semantic (rather than syntactic) relationship. Third, our CTI analysis can construct a complete attack pattern by mapping each threat action to the appropriate techniques, tactics and kill chain phases, and translating it any threat sharing standards, such as STIX 2.1. Our CTI analytic techniques were implemented in a tool, called TTPDrill, and evaluated using a randomly selected set of Symantec Threat Reports. Our evaluation tests show that TTPDrill achieves more than 82% of precision and recall in a variety of measures, very reasonable for this problem domain.",
"title": ""
},
{
"docid": "a1b3616da2faad8093c44fb7dfce6974",
"text": "In this paper, a multiobjective optimization approach for designing a Manipulator Robot by simultaneously considering the mechanism, the controller and the servo drive subsystems is proposed. The integrated design problem is considered as a nonlinear multiobjective dynamic optimization problem, which relates the structural parameters, the robot controller and the selection of the ratio gear-motor from an industry catalog. A three dof manipulator robot and its controller are designed, where the performance design objectives are tracking error, manipulability measure and energy consumption.",
"title": ""
},
{
"docid": "ba55729b62e2232064f070460f48d552",
"text": "A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing is organized. As information and communication technologies continue to address the need for increased computational power through the increase of cores within a digital processor, neuromorphic engineers and scientists can complement this need by building processor architectures where memory is distributed with the processing. In this paper, we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multineuron systems to massively parallel asynchronous ones and from purely digital systems to mixed analog/digital systems which implement more biological-like models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.",
"title": ""
},
{
"docid": "18ec8152701d1d049d458d1d71dbb334",
"text": "The present study compared neural connectivity and the level of phasic synchronization between neural populations in patients with Internet gaming disorder (IGD), patients with alcohol use disorder (AUD), and healthy controls (HCs) using resting-state electroencephalography (EEG) coherence analyses. For this study, 92 adult males were categorized into three groups: IGD (n = 30), AUD (n = 30), and HC (n = 32). The IGD group exhibited increased intrahemispheric gamma (30–40 Hz) coherence compared to the AUD and HC groups regardless of psychological features (e.g., depression, anxiety, and impulsivity) and right fronto-central gamma coherence positively predicted the scores of the Internet addiction test in all groups. In contrast, the AUD group showed marginal tendency of increased intrahemispheric theta (4–8 Hz) coherence relative to the HC group and this was dependent on the psychological features. The present findings indicate that patients with IGD and AUD exhibit different neurophysiological patterns of brain connectivity and that an increase in the fast phasic synchrony of gamma coherence might be a core neurophysiological feature of IGD.",
"title": ""
},
{
"docid": "4c21ec3a600d773ea16ce6c45df8fe9d",
"text": "The efficacy of particle identification is compared using artificial neutral networks and boosted decision trees. The comparison is performed in the context of the MiniBooNE, an experiment at Fermilab searching for neutrino oscillations. Based on studies of Monte Carlo samples of simulated data, particle identification with boosting algorithms has better performance than that with artificial neural networks for the MiniBooNE experiment. Although the tests in this paper were for one experiment, it is expected that boosting algorithms will find wide application in physics. r 2005 Elsevier B.V. All rights reserved. PACS: 29.85.+c; 02.70.Uu; 07.05.Mh; 14.60.Pq",
"title": ""
},
{
"docid": "499a37563d171054ad0b0d6b8f7007bf",
"text": "For cold-start recommendation, it is important to rapidly profile new users and generate a good initial set of recommendations through an interview process --- users should be queried adaptively in a sequential fashion, and multiple items should be offered for opinion solicitation at each trial. In this work, we propose a novel algorithm that learns to conduct the interview process guided by a decision tree with multiple questions at each split. The splits, represented as sparse weight vectors, are learned through an L_1-constrained optimization framework. The users are directed to child nodes according to the inner product of their responses and the corresponding weight vector. More importantly, to account for the variety of responses coming to a node, a linear regressor is learned within each node using all the previously obtained answers as input to predict item ratings. A user study, preliminary but first in its kind in cold-start recommendation, is conducted to explore the efficient number and format of questions being asked in a recommendation survey to minimize user cognitive efforts. Quantitative experimental validations also show that the proposed algorithm outperforms state-of-the-art approaches in terms of both the prediction accuracy and user cognitive efforts.",
"title": ""
},
{
"docid": "af8ddd6792a98ea3b59bdaab7c7fa045",
"text": "This research explores the alternative media ecosystem through a Twitter lens. Over a ten-month period, we collected tweets related to alternative narratives—e.g. conspiracy theories—of mass shooting events. We utilized tweeted URLs to generate a domain network, connecting domains shared by the same user, then conducted qualitative analysis to understand the nature of different domains and how they connect to each other. Our findings demonstrate how alternative news sites propagate and shape alternative narratives, while mainstream media deny them. We explain how political leanings of alternative news sites do not align well with a U.S. left-right spectrum, but instead feature an antiglobalist (vs. globalist) orientation where U.S. Alt-Right sites look similar to U.S. Alt-Left sites. Our findings describe a subsection of the emerging alternative media ecosystem and provide insight in how websites that promote conspiracy theories and pseudo-science may function to conduct underlying political agendas.",
"title": ""
},
{
"docid": "73d9461101dc15f93f52d2ab9b8c0f39",
"text": "The need for mining structured data has increased in the past few years. One of the best studied data structures in computer science and discrete mathematics are graphs. It can therefore be no surprise that graph based data mining has become quite popular in the last few years.This article introduces the theoretical basis of graph based data mining and surveys the state of the art of graph-based data mining. Brief descriptions of some representative approaches are provided as well.",
"title": ""
},
{
"docid": "14f59c6eb8f1c8518f74bc14a1d89fa6",
"text": "Compression artifacts reduction (CAR) is a challenging problem in the field of remote sensing. Most recent deep learning based methods have demonstrated superior performance over the previous hand-crafted methods. In this paper, we propose an end-to-end one-two-one (OTO) network, to combine different deep models, i.e., summation and difference models, to solve the CAR problem. Particularly, the difference model motivated by the Laplacian pyramid is designed to obtain the high frequency information, while the summation model aggregates the low frequency information. We provide an in-depth investigation into our OTO architecture based on the Taylor expansion, which shows that these two kinds of information can be fused in a nonlinear scheme to gain more capacity of handling complicated image compression artifacts, especially the blocking effect in compression. Extensive experiments are conducted to demonstrate the superior performance of the OTO networks, as compared to the state-of-the-arts on remote sensing datasets and other benchmark datasets. The source code will be available here.",
"title": ""
},
{
"docid": "98d8822a658dc7ecdfb7cb824c73e7a5",
"text": "We address the problem of generating query suggestions to support users in completing their underlying tasks (which motivated them to search in the first place). Given an initial query, these query suggestions should provide a coverage of possible subtasks the user might be looking for. We propose a probabilistic modeling framework that obtains keyphrases from multiple sources and generates query suggestions from these keyphrases. Using the test suites of the TREC Tasks track, we evaluate and analyze each component of our model.",
"title": ""
},
{
"docid": "050df7cb6f2d633814489fed6859fc3e",
"text": "In this article, we analyze institutionalization as a process of transferring and stabilizing material artifacts and routines in the form of enterprise resource planning (ERP) systems. Although past studies have analyzed institutionalization as structuring around scripts or discourse moves, we emphasize the material role of artifacts and routines as carriers of institutional logics. In addition, insitutionalization is not linear and incremental, but goes through sudden, nonlinear disruptions. To this end, we apply punctuated socio-technical information system change (PSIC) model that draws upon Gersick’s model of change to identify and trace moves that are critical during the institutionalization. The model accounts for ERP institutionalization by chronicling complex interactions between socio-technical elements in the implementation system, the work system, and organizational and environmental context which together account for the institutionalization outcome. We use the model to analyze a longitudinal case covering 11 years (1993–2004) of ERP implementation processes in a large Saudi steel firm. Our analysis shows that the proposed material and punctuated lens toward institutionalization offers rich insights how and why ERP systems become institutions and why their institutionalization is difficult and unfolds in unpredictable ways. We conclude that the normally held assumptions of successful linear and incremental adaptation to new institutional patterns logics out by ERP systems do not hold. Journal of Information Technology (2009) 24, 286–304. doi:10.1057/jit.2009.14",
"title": ""
},
{
"docid": "f6fa1c4ce34f627d9d7d1ca702272e26",
"text": "One of the most difficult aspects in rhinoplasty is resolving and preventing functional compromise of the nasal valve area reliably. The nasal valves are crucial for the individual breathing competence of the nose. Structural and functional elements contribute to this complex system: the nasolabial angle, the configuration and stability of the alae, the function of the internal nasal valve, the anterior septum symmetrically separating the bilateral airways and giving structural and functional support to the alar cartilage complex and to their junction with the upper lateral cartilages, the scroll area. Subsequently, the open angle between septum and sidewalls is important for sufficient airflow as well as the position and function of the head of the turbinates. The clinical examination of these elements is described. Surgical techniques are more or less well known and demonstrated with patient examples and drawings: anterior septoplasty, reconstruction of tip and dorsum support by septal extension grafts and septal replacement, tip suspension and lateral crural sliding technique, spreader grafts and suture techniques, splay grafts, alar batten grafts, lateral crural extension grafts, and lateral alar suspension. The numerous literature is reviewed.",
"title": ""
},
{
"docid": "748926afd2efcae529a58fbfa3996884",
"text": "The purpose of this research was to investigate preservice teachers’ perceptions about using m-phones and laptops in education as mobile learning tools. A total of 1087 preservice teachers participated in the study. The results indicated that preservice teachers perceived laptops potentially stronger than m-phones as m-learning tools. In terms of limitations the situation was balanced for laptops and m-phones. Generally, the attitudes towards using laptops in education were not exceedingly positive but significantly more positive than m-phones. It was also found that such variables as program/department, grade, gender and possessing a laptop are neutral in causing a practically significant difference in preservice teachers’ views. The results imply an urgent need to grow awareness among participating student teachers towards the concept of m-learning, especially m-learning through m-phones. Introduction The world is becoming a mobigital virtual space where people can learn and teach digitally anywhere and anytime. Today, when timely access to information is vital, mobile devices such as cellular phones, smartphones, mp3 and mp4 players, iPods, digital cameras, data-travelers, personal digital assistance devices (PDAs), netbooks, laptops, tablets, iPads, e-readers such as the Kindle, Nook, etc have spread very rapidly and become common (El-Hussein & Cronje, 2010; Franklin, 2011; Kalinic, Arsovski, Stefanovic, Arsovski & Rankovic, 2011). Mobile devices are especially very popular among young population (Kalinic et al, 2011), particularly among university students (Cheon, Lee, Crooks & Song, 2012; Park, Nam & Cha, 2012). Thus, the idea of learning through mobile devices has gradually become a trend in the field of digital learning (Jeng, Wu, Huang, Tan & Yang, 2010). This is because learning with mobile devices promises “new opportunities and could improve the learning process” (Kalinic et al, 2011, p. 1345) and learning with mobile devices can help achieving educational goals if used through appropriate learning strategies (Jeng et al, 2010). As a matter of fact, from a technological point of view, mobile devices are getting more capable of performing all of the functions necessary in learning design (El-Hussein & Cronje, 2010). This and similar ideas have brought about the concept of mobile learning or m-learning. British Journal of Educational Technology Vol 45 No 4 2014 606–618 doi:10.1111/bjet.12064 © 2013 British Educational Research Association Although mobile learning applications are at their early days, there inevitably emerges a natural pressure by students on educators to integrate m-learning (Franklin, 2011) and so a great deal of attention has been drawn in these applications in the USA, Europe and Asia (Wang & Shen, 2012). Several universities including University of Glasgow, University of Sussex and University of Regensburg have been trying to explore and include the concept of m-learning in their learning systems (Kalinic et al, 2011). Yet, the success of m-learning integration requires some degree of awareness and positive attitudes by students towards m-learning. In this respect, in-service or preservice teachers’ perceptions about m-learning become more of an issue, since their attitudes are decisive in successful integration of m-learning (Cheon et al, 2012). Then it becomes critical whether the teachers, in-service or preservice, have favorable perceptions and attitudinal representations regarding m-learning. 
Theoretical framework M-learning M-learning has a recent history. When developed as the next phase of e-learning in early 2000s (Peng, Su, Chou & Tsai, 2009), its potential for education could not be envisaged (Attewell, 2005). However, recent developments in mobile and wireless technologies facilitated the departure from traditional learning models with time and space constraints, replacing them with Practitioner Notes What is already known about this topic • Mobile devices are very popular among young population, especially among university students. • Though it has a recent history, m-learning (ie, learning through mobile devices) has gradually become a trend. • M-learning brings new opportunities and can improve the learning process. Previous research on m-learning mostly presents positive outcomes in general besides some drawbacks. • The success of integrating m-learning in teaching practice requires some degree of awareness and positive attitudes by students towards m-learning. What this paper adds • Since teachers’ attitudes are decisive in successful integration of m-learning in teaching, the present paper attempts to understand whether preservice teachers have favorable perceptions and attitudes regarding m-learning. • Unlike much of the previous research on m-learning that handle perceptions about m-learning in a general sense, the present paper takes a more specific approach to distinguish and compare the perceptions about two most common m-learning tools: m-phones and laptops. • It also attempts to find out the variables that cause differences in preservice teachers’ perceptions about using these m-learning devices. Implications for practice and/or policy • Results imply an urgent need to grow awareness and further positive attitudes among participating student teachers towards m-learning, especially through m-phones. • Some action should be taken by the faculty and administration to pedagogically inform and raise awareness about m-learning among preservice teachers. Preservice teachers’ perceptions of M-learning tools 607 © 2013 British Educational Research Association models embedded into our everyday environment, and the paradigm of mobile learning emerged (Vavoula & Karagiannidis, 2005). Today it spreads rapidly and promises to be one of the efficient ways of education (El-Hussein & Cronje, 2010). Partly because it is a new concept, there is no common definition of m-learning in the literature yet (Peng et al, 2009). A good deal of literature defines m-learning as a derivation or extension of e-learning, which is performed using mobile devices such as PDA, mobile phones, laptops, etc (Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Riad & El-Ghareeb, 2008). Other definitions highlight certain characteristics of m-learning including portability through mobile devices, wireless Internet connection and ubiquity. For example, a common definition of m-learning in scholarly literature is “the use of portable devices with Internet connection capability in education contexts” (Kinash, Brand & Mathew, 2012, p. 639). In a similar vein, Park et al (2012, p. 592) defines m-learning as “any educational provision where the sole or dominant technologies are handheld or palmtop devices.” On the other hand, m-learning is likely to be simply defined stressing its property of ubiquity, referring to its ability to happen whenever and wherever needed (Peng et al, 2009). For example, Franklin (2011, p. 
261) defines mobile learning as “learning that happens anywhere, anytime.” Though it is rather a new research topic and the effectiveness of m-learning in terms of learning achievements has not been fully investigated (Park et al, 2012), there is already an agreement that m-learning brings new opportunities and can improve the learning process (Kalinic et al, 2011). Moreover, the literature review by Wu et al (2012) notes that 86% of the 164 mobile learning studies present positive outcomes in general. Several perspectives of m-learning are attributed in the literature in association with these positive outcomes. The most outstanding among them is the feature of mobility. M-learning makes sense as an educational activity because the technology and its users are mobile (El-Hussein & Cronje, 2010). Hence, learning outside the classroom walls is possible (Nordin, Embi & Yunus, 2010; Şad, 2008; Saran, Seferoğlu & Çağıltay, 2009), enabling students to become an active participant, rather than a passive receiver of knowledge (Looi et al, 2010). This unique feature of m-learning brings about not only the possibility of learning anywhere without limits of classroom or library but also anytime (Çavuş & İbrahim, 2009; Hwang & Chang, 2011; Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Sha, Looi, Chen & Zhang, 2012; Sølvberg & Rismark, 2012). This especially offers learners a certain amount of “freedom and independence” (El-Hussein & Cronje, 2010, p. 19), as well as motivation and ability to “self-regulate their own learning” (Sha et al, 2012, p. 366). This idea of learning coincides with the principles of and meet the requirements of other popular paradigms in education including lifelong learning (Nordin et al, 2010), student-centeredness (Sha et al, 2012) and constructivism (Motiwalla, 2007). Beside the favorable properties referred in the m-learning literature, some drawbacks of m-learning are frequently criticized. The most pronounced one is the small screen sizes of the m-learning tools that makes learning activity difficult (El-Hussein & Cronje, 2010; Kalinic et al, 2011; Riad & El-Ghareeb, 2008; Suki & Suki, 2011). Another problem is the weight and limited battery lives of m-tools, particularly the laptops (Riad & El-Ghareeb, 2008). Lack of understanding or expertise with the technology also hinders nontechnical students’ active use of m-learning (Corbeil & Valdes-Corbeil, 2007; Franklin, 2011). Using mobile devices in classroom can cause distractions and interruptions (Cheon et al, 2012; Fried, 2008; Suki & Suki, 2011). Another concern seems to be about the challenged role of the teacher as the most learning activities take place outside the classroom (Sølvberg & Rismark, 2012). M-learning in higher education Mobile learning is becoming an increasingly promising way of delivering instruction in higher education (El-Hussein & Cronje, 2010). This is justified by the current statistics about the 608 British Journal of Educational Technology Vol 45 No 4 2014 © 2013 British Education",
"title": ""
},
{
"docid": "26bf14bbb9aa8336a25fc27045ea0a34",
"text": "Lithium-ion batteries raise safety, environmental, and cost concerns, which mostly arise from their nonaqueous electrolytes. The use of aqueous alternatives is limited by their narrow electrochemical stability window (1.23 volts), which sets an intrinsic limit on the practical voltage and energy output. We report a highly concentrated aqueous electrolyte whose window was expanded to ~3.0 volts with the formation of an electrode-electrolyte interphase. A full lithium-ion battery of 2.3 volts using such an aqueous electrolyte was demonstrated to cycle up to 1000 times, with nearly 100% coulombic efficiency at both low (0.15 C) and high (4.5 C) discharge and charge rates.",
"title": ""
},
{
"docid": "613b014ea02019a78be488a302ff4794",
"text": "In this study, the robustness of approaches to the automatic classification of emotions in speech is addressed. Among the many types of emotions that exist, two groups of emotions are considered, adult-to-adult acted vocal expressions of common types of emotions like happiness, sadness, and anger and adult-to-infant vocal expressions of affective intents also known as ‘‘motherese’’. Specifically, we estimate the generalization capability of two feature extraction approaches, the approach developed for Sony’s robotic dog AIBO (AIBO) and the segment-based approach (SBA) of [Shami, M., Kamel, M., 2005. Segment-based approach to the recognition of emotions in speech. In: IEEE Conf. on Multimedia and Expo (ICME05), Amsterdam, The Netherlands]. Three machine learning approaches are considered, K-nearest neighbors (KNN), Support vector machines (SVM) and Ada-boosted decision trees and four emotional speech databases are employed, Kismet, BabyEars, Danish, and Berlin databases. Single corpus experiments show that the considered feature extraction approaches AIBO and SBA are competitive on the four databases considered and that their performance is comparable with previously published results on the same databases. The best choice of machine learning algorithm seems to depend on the feature extraction approach considered. Multi-corpus experiments are performed with the Kismet–BabyEars and the Danish–Berlin database pairs that contain parallel emotional classes. Automatic clustering of the emotional classes in the database pairs shows that the patterns behind the emotions in the Kismet–BabyEars pair are less database dependent than the patterns in the Danish–Berlin pair. In off-corpus testing the classifier is trained on one database of a pair and tested on the other. This provides little improvement over baseline classification. In integrated corpus testing, however, the classifier is machine learned on the merged databases and this gives promisingly robust classification results, which suggest that emotional corpora with parallel emotion classes recorded under different conditions can be used to construct a single classifier capable of distinguishing the emotions in the merged corpora. Such a classifier is more robust than a classifier learned on a single corpus as it can recognize more varied expressions of the same emotional classes. These findings suggest that the existing approaches for the classification of emotions in speech are efficient enough to handle larger amounts of training data without any reduction in classification accuracy. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9fd247bb0f45d09e11c05fca48372ee8",
"text": "Based on the CSMC 0.6um 40V BCD process and the bandgap principle a reference circuit used in high voltage chip is designed. The simulation results show that a temperature coefficient of 26.5ppm/°C in the range of 3.5∼40V supply, the output voltage is insensitive to the power supply, when the supply voltage rages from 3.5∼40V, the output voltage is equal to 1.2558V to 1.2573V at room temperature. The circuit we designed has high precision and stability, thus it can be used as stability reference voltage in power management IC.",
"title": ""
},
{
"docid": "784720919b860d9f0606d65036ef8297",
"text": "Conventional word embedding models do not leverage information from document metadata, and they do not model uncertainty. We address these concerns with a model that incorporates document covariates to estimate conditional word embedding distributions. Our model allows for (a) hypothesis tests about the meanings of terms, (b) assessments as to whether a word is near or far from another conditioned on different covariate values, and (c) assessments as to whether estimated differences are statistically significant.",
"title": ""
},
{
"docid": "80dc23815c60dd7f86007d093e1f8c7a",
"text": "Cryptocurrency, a form of digital currency that has an open and decentralized system and uses cryptography to enhance security and control the creation of new units, is touted to be the next step from conventional monetary transactions. Many cryptocurrencies exist today, with Bitcoin being the most prominent of them. Cryptocurrencies are generated by mining, as a fee for validating any transaction. The rate of generating hashes, which validate any transaction, has been increased by the use of specialized machines such as FPGAs and ASICs, running complex hashing algorithms like SHA-256 and Scrypt, thereby leading to faster generation of cryptocurrencies. This arms race for cheaper-yet-efficient machines has been on since the day the first cryptocurrency, Bitcoin, was introduced in 2009. However, with more people venturing into the world of virtual currency, generating hashes for this validation has become far more complex over the years, with miners having to invest huge sums of money on employing multiple high performance ASICs. Thus the value of the currency obtained for finding a hash did not justify the amount of money spent on setting up the machines, the cooling facilities to overcome the enormous amount of heat they produce and electricity required to run them. The next logical step in this is to utilize the power of cloud computing. Miners leasing super computers that generate hashes at astonishing rates that have a high probability of profits, with the same machine being leased to more than one person on a time bound basis is a win-win situation to both the miners, as well as the cloud service providers. This paper throws light on the nuances of cryptocurrency mining process, the traditional machines used for mining, their limitations, about how cloud based mining is the logical next step and the advantage that cloud platform offers over the traditional machines. Keywords—Cryptocurrency; Bitcoin mining; Cloud mining; Double Spending; Profitability",
"title": ""
},
{
"docid": "f331337a19cff2cf29e89a87d7ab234f",
"text": "This paper presents an investigation of lexical chaining (Morris and Hirst, 1991) for measuring discourse coherence quality in test-taker essays. We hypothesize that attributes of lexical chains, as well as interactions between lexical chains and explicit discourse elements, can be harnessed for representing coherence. Our experiments reveal that performance achieved by our new lexical chain features is better than that of previous discourse features used for this task, and that the best system performance is achieved when combining lexical chaining features with complementary discourse features, such as those provided by a discourse parser based on rhetorical structure theory, and features that reflect errors in grammar, word usage, and mechanics.",
"title": ""
}
] |
scidocsrr
|
898e5eea174ad3804fed1f09e0dc820a
|
The Critical Importance of Retrieval--and Spacing--for Learning.
|
[
{
"docid": "aca43e4fa3ad889aca212783a0984454",
"text": "Two studies examined undergraduates' metacognitive awareness of six empirically-supported learning strategies. Study 1 results overall suggested an inability to predict the learning outcomes of educational scenarios describing the strategies of dual-coding, static-media presentations, low-interest extraneous details, testing, and spacing; there was, however, weak endorsement of the strategy of generating one's own study materials. In addition, an independent measure of metacognitive self-regulation was correlated with scenario performance. Study 2 demonstrated higher prediction accuracy for students who had received targeted instruction on applied memory topics in their psychology courses, and the best performance for those students directly exposed to the original empirical studies from which the scenarios were derived. In sum, this research suggests that undergraduates are largely unaware of several specific strategies that could benefit memory for course information; further, training in applied learning and memory topics has the potential to improve metacognitive judgments in these domains.",
"title": ""
},
{
"docid": "876bbee05b7838f4de218b424d895887",
"text": "Although it is commonplace to assume that the type or level of processing during the input of a verbal item determines the representation of that item in memory, which in turn influences later attempts to store, recognize, or recall that item or similar items, it is much less common to assume that the way in which an item is retrieved from memory is also a potent determiner of that item's subsequent representation in memory. Retrieval from memory is often assumed, implicitly or explicitly, as a process analogous to the way in which the contents of a memory location in a computer are read out, that is, as a process that does not, by itself, modify the state of the retrieved item in memory. In my opinion, however, there is ample evidence for a kind of Heisenberg principle with respect to retrieval processes: an item can seldom, if ever, be retrieved from memory without modifying the representation of that item in memory in significant ways. It is both appropriate and productive, I think, to analyze retrieval processes within the same kind of levels-of-processing framework formulated by Craik and Lockhart ( 1972) with respect to input processes; this chapter is an attempt to do so. In the first of the two main sections below, I explore the extent to which negative-recency phenomena in the long-term recall of a list of items is attributable to differences in levels of retrieval during initial recall. In the second section I present some recent results from ex-",
"title": ""
},
{
"docid": "3d3e728e5587fe9fd686fca09a6a06f4",
"text": "Knowing how to manage one's own learning has become increasingly important in recent years, as both the need and the opportunities for individuals to learn on their own outside of formal classroom settings have grown. During that same period, however, research on learning, memory, and metacognitive processes has provided evidence that people often have a faulty mental model of how they learn and remember, making them prone to both misassessing and mismanaging their own learning. After a discussion of what learners need to understand in order to become effective stewards of their own learning, we first review research on what people believe about how they learn and then review research on how people's ongoing assessments of their own learning are influenced by current performance and the subjective sense of fluency. We conclude with a discussion of societal assumptions and attitudes that can be counterproductive in terms of individuals becoming maximally effective learners.",
"title": ""
},
{
"docid": "db28ae27e5c88f995c61d94f3bfcc4da",
"text": "Testing in school is usually done for purposes of assessment, to assign students grades (from tests in classrooms) or rank them in terms of abilities (in standardized tests). Yet tests can serve other purposes in educational settings that greatly improve performance; this chapter reviews 10 other benefits of testing. Retrieval practice occurring during tests can greatly enhance retention of the retrieved information (relative to no testing or even to restudying). Furthermore, besides its durability, such repeated retrieval produces knowledge that can be retrieved flexibly and transferred to other situations. On open-ended assessments (such as essay tests), retrieval practice required by tests can help students organize information and form a coherent knowledge base. Retrieval of some information on a test can also lead to easier retrieval of related information, at least on PsychologyofLearningandMotivation, Volume 55 # 2011 Elsevier Inc. ISSN 0079-7421, DOI 10.1016/B978-0-12-387691-1.00001-6 All rights reserved.",
"title": ""
},
{
"docid": "2490ad05628f62881e16338914135d17",
"text": "The authors examined the hypothesis that judgments of learning (JOL), if governed by processing fluency during encoding, should be insensitive to the anticipated retention interval. Indeed, neither item-by-item nor aggregate JOLs exhibited \"forgetting\" unless participants were asked to estimate recall rates for several different retention intervals, in which case their estimates mimicked closely actual recall rates. These results and others reported suggest that participants can access their knowledge about forgetting but only when theory-based predictions are made, and then only when the notion of forgetting is accentuated either by manipulating retention interval within individuals or by framing recall predictions in terms of forgetting rather than remembering. The authors interpret their findings in terms of the distinction between experience-based and theory-based JOLs.",
"title": ""
},
{
"docid": "2c853123a29d27c3713c8159d13c3728",
"text": "Retrieval practice is a potent technique for enhancing learning, but how often do students practice retrieval when they regulate their own learning? In 4 experiments the subjects learned foreign-language items across multiple study and test periods. When items were assigned to be repeatedly tested, repeatedly studied, or removed after they were recalled, repeated retrieval produced powerful effects on learning and retention. However, when subjects were given control over their own learning and could choose to test, study, or remove items, many subjects chose to remove items rather than practice retrieval, leading to poor retention. In addition, when tests were inserted in the learning phase, attempting retrieval improved learning by enhancing subsequent encoding during study. But when students were given control over their learning they did not attempt retrieval as early or as often as they should to promote the best learning. The experiments identify a compelling metacognitive illusion that occurs during self-regulated learning: Once students can recall an item they tend to believe they have \"learned\" it. This leads students to terminate practice rather than practice retrieval, a strategy choice that ultimately results in poor retention.",
"title": ""
}
] |
[
{
"docid": "72f9891b711ebc261fc081a0b356c31b",
"text": "This paper presents a flat, high gain, wide scanning, broadband continuous transverse stub (CTS) array. The design procedure, the fabrication, and an exhaustive antenna characterization are described in details. The array comprises 16 radiating slots and is fed by a corporate-feed network in hollow parallel plate waveguide (PPW) technology. A pillbox-based linear source illuminates the corporate network and allows for beam steering. The antenna is designed by using an ad hoc mode matching code recently developed for CTS arrays, providing design guidelines. The assembly technique ensures the electrical contact among the various stages of the network without using any electromagnetic choke and any bonding process. The main beam of the antenna is mechanically steered over ±40° in elevation, by moving a compact horn within the focal plane of the pillbox feeding system. Excellent performances are achieved. The features of the beam are stable within the design 27.5-31 GHz band and beyond, in the entire Ka-band (26.5-40 GHz). An antenna gain of about 29 dBi is measured at broadside at 29.25 GHz and scan losses lower than 2 dB are reported at ±40°. The antenna efficiency exceeds 80% in the whole scan range. The very good agreement between measurements and simulations validates the design procedure. The proposed design is suitable for Satcom Ka-band terminals in moving platforms, e.g., trains and planes, and also for mobile ground stations, as a multibeam sectorial antenna.",
"title": ""
},
{
"docid": "8e654ace264f8062caee76b0a306738c",
"text": "We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.",
"title": ""
},
{
"docid": "bb603491b2adbf26f1663a8567362ae1",
"text": "Nurses in an Armed Force Hospital (AFH) expose to stronger stress than those in a civil hospital, especially in an emergency department (ED). Ironically, stresses of these nurses received few if any attention in academic research in the past. This study collects 227 samples from the emergency departments of four armed force hospitals in central and southern Taiwan. The research indicates that the top five stressors are a massive casualty event, delayed physician support, overloads of routine work, overloads of assignments, and annoying paper work. Excessive work loading was found to be the primary source of stress. Nurses who were perceived to have greater stress levels were more inclined to deploy emotion-oriented approaches and more likely to seek job rotations. Professional stressors and problem-oriented approaches were positively correlated. Unlike other local studies, this study concludes that the excessive work-loading is more stressful in an AFH. Keywords—Emergency nurse; Job stressor; Coping behavior; Armed force hospital.",
"title": ""
},
{
"docid": "6f0d9f383c0142b43ea440e6efb2a59a",
"text": "OBJECTIVES\nTo evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs.\n\n\nMETHODS\nThirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis.\n\n\nRESULTS\nThe time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode.\n\n\nCLINICAL SIGNIFICANCE\nThe probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.",
"title": ""
},
{
"docid": "a5f80f6f36f8db1673ccc57de9044b5e",
"text": "Nowadays, many modern applications, e.g. autonomous system, and cloud data services need to capture and process a big amount of raw data at runtime that ultimately necessitates a high-performance computing model. Deep Neural Network (DNN) has already revealed its learning capabilities in runtime data processing for modern applications. However, DNNs are becoming more deep sophisticated models for gaining higher accuracy which require a remarkable computing capacity. Considering high-performance cloud infrastructure as a supplier of required computational throughput is often not feasible. Instead, we intend to find a near-sensor processing solution which will lower the need for network bandwidth and increase privacy and power efficiency, as well as guaranteeing worst-case response-times. Toward this goal, we introduce ADONN framework, which aims to automatically design a highly robust DNN architecture for embedded devices as the closest processing unit to the sensors. ADONN adroitly searches the design space to find improved neural architectures. Our proposed framework takes advantage of a multi-objective evolutionary approach, which exploits a pruned design space inspired by a dense architecture. Unlike recent works that mainly have tried to generate highly accurate networks, ADONN also considers the network size factor as the second objective to build a highly optimized network fitting with limited computational resource budgets while delivers comparable accuracy level. In comparison with the best result on CIFAR-10 dataset, a generated network by ADONN presents up to 26.4 compression rate while loses only 4% accuracy. In addition, ADONN maps the generated DNN on the commodity programmable devices including ARM Processor, High-Performance CPU, GPU, and FPGA.",
"title": ""
},
{
"docid": "5e756f85b15812daf80221c8b9ae6a96",
"text": "PURPOSE\nRural-dwelling cancer survivors (CSs) are at risk for decrements in health and well-being due to decreased access to health care and support resources. This study compares the impact of cancer in rural- and urban-dwelling adult CSs living in 2 regions of the Pacific Northwest.\n\n\nMETHODS\nA convenience sample of posttreatment adult CSs (N = 132) completed the Impact of Cancer version 2 (IOCv2) and the Memorial Symptom Assessment Scale-short form. High and low scorers on the IOCv2 participated in an in-depth interview (n = 19).\n\n\nFINDINGS\nThe sample was predominantly middle-aged (mean age 58) and female (84%). Mean time since treatment completion was 6.7 years. Cancer diagnoses represented included breast (56%), gynecologic (9%), lymphoma (8%), head and neck (6%), and colorectal (5%). Comparisons across geographic regions show statistically significant differences in body concerns, worry, negative impact, and employment concerns. Rural-urban differences from interview data include access to health care, care coordination, connecting/community, thinking about death and dying, public/private journey, and advocacy.\n\n\nCONCLUSION\nThe insights into the differences and similarities between rural and urban CSs challenge the prevalent assumptions about rural-dwelling CSs and their risk for negative outcomes. A common theme across the study findings was community. Access to health care may not be the driver of the survivorship experience. Findings can influence health care providers and survivorship program development, building on the strengths of both rural and urban living and the engagement of the survivorship community.",
"title": ""
},
{
"docid": "a910a28224ac10c8b4d2781a73849499",
"text": "The computing machine Z3, buHt by Konrad Zuse from 1938 to 1941, could only execute fixed sequences of floating-point arithmetical operations (addition, subtraction, multiplication, division and square root) coded in a punched tape. We show in this paper that a single program loop containing this type of instructions can simulate any Turing machine whose tape is of bounded size. This is achieved by simulating conditional branching and indirect addressing by purely arithmetical means. Zuse's Z3 is therefore, at least in principle, as universal as today's computers which have a bounded memory size. This result is achieved at the cost of blowing up the size of the program stored on punched tape. Universal Machines and Single Loops Nobody has ever built a universal computer. The reason is that a universal computer consists, in theory, of a fixed processor and a memory of unbounded size. This is the case of Turing machines with their unbounded tapes. In the theory of general recursive functions there is also a small set of rules and some predefined functions, but there is no upper bound on the size of intermediate reduction terms. Modern computers are only potentially universal: They can perform any computation that a Turing machine with a bounded tape can perform. If more storage is required, more can be added without having to modify the processor (provided that the extra memory is still addressable).",
"title": ""
},
{
"docid": "72a6001b54359139b565f0056bd0cfe2",
"text": "Porous CuO nanosheets were prepared on alumina tubes using a facile hydrothermal method, and their morphology, microstructure, and gas-sensing properties were investigated. The monoclinic CuO nanosheets had an average thickness of 62.5 nm and were embedded with numerous holes with diameters ranging from 5 to 17 nm. The porous CuO nanosheets were used to fabricate gas sensors to detect hydrogen sulfide (H2S) operating at room temperature. The sensor showed a good response sensitivity of 1.25 with respond/recovery times of 234 and 76 s, respectively, when tested with the H2S concentrations as low as 10 ppb. It also showed a remarkably high selectivity to the H2S, but only minor responses to other gases such as SO2, NO, NO2, H2, CO, and C2H5OH. The working principle of the porous CuO nanosheet based sensor to detect the H2S was identified to be the phase transition from semiconducting CuO to a metallic conducting CuS.",
"title": ""
},
{
"docid": "ccd663355ff6070b3668580150545cea",
"text": "In this paper, the user effects on mobile terminal antennas at 28 GHz are statistically investigated with the parameters of body loss, coverage efficiency, and power in the shadow. The data are obtained from the measurements of 12 users in data and talk modes, with the antenna placed on the top and bottom of the chassis. In the measurements, the users hold the phone naturally. The radiation patterns and shadowing regions are also studied. It is found that a significant amount of power can propagate into the shadow of the user by creeping waves and diffractions. A new metric is defined to characterize this phenomenon. A mean body loss of 3.2–4 dB is expected in talk mode, which is also similar to the data mode with the bottom antenna. A body loss of 1 dB is expected in data mode with the top antenna location. The variation of the body loss between the users at 28 GHz is less than 2 dB, which is much smaller than that of the conventional cellular bands below 3 GHz. The coverage efficiency is significantly reduced in talk mode, but only slightly affected in data mode.",
"title": ""
},
{
"docid": "717ea3390ffe3f3132d4e2230e645ee5",
"text": "Much of what is known about physiological systems has been learned using linear system theory. However, many biomedical signals are apparently random or aperiodic in time. Traditionally, the randomness in biological signals has been ascribed to noise or interactions between very large numbers of constituent components. One of the most important mathematical discoveries of the past few decades is that random behavior can arise in deterministic nonlinear systems with just a few degrees of freedom. This discovery gives new hope to providing simple mathematical models for analyzing, and ultimately controlling, physiological systems. The purpose of this chapter is to provide a brief pedagogic survey of the main techniques used in nonlinear time series analysis and to provide a MATLAB tool box for their implementation. Mathematical reviews of techniques in nonlinear modeling and forecasting can be found in Refs. 1-5. Biomedical signals that have been analyzed using these techniques include heart rate [6-8], nerve activity [9], renal flow [10], arterial pressure [11], electroencephalogram [12], and respiratory waveforms [13]. Section 2 provides a brief overview of dynamical systems theory including phase space portraits, Poincare surfaces of section, attractors, chaos, Lyapunov exponents, and fractal dimensions. The forced Duffing-Van der Pol oscillator (a ubiquitous model in engineering problems) is investigated as an illustrative example. Section 3 outlines the theoretical tools for time series analysis using dynamical systems theory. Reliability checks based on forecasting and surrogate data are also described. The time series methods are illustrated using data from the time evolution of one of the dynamical variables of the forced Duffing-Van der Pol oscillator. Section 4 concludes with a discussion of possible future directions for applications of nonlinear time series analysis in biomedical processes.",
"title": ""
},
{
"docid": "719c945e9f45371f8422648e0e81178f",
"text": "As technology in the cloud increases, there has been a lot of improvements in the maturity and firmness of cloud storage technologies. Many end-users and IT managers are getting very excited about the potential benefits of cloud storage, such as being able to store and retrieve data in the cloud and capitalizing on the promise of higher-performance, more scalable and cut-price storage. In this thesis, we present a typical Cloud Storage system architecture, a referral Cloud Storage model and Multi-Tenancy Cloud Storage model, value the past and the state-ofthe-art of Cloud Storage, and examine the Edge and problems that must be addressed to implement Cloud Storage. Use cases in diverse Cloud Storage offerings were also abridged. KEYWORDS—Cloud Storage, Cloud Computing, referral model, Multi-Tenancy, survey",
"title": ""
},
{
"docid": "f83f5eaa47f4634311297886b8e2228c",
"text": "Purpose of this study is to determine whether cash flow impacts business failure prediction using the BP models (Altman z-score, or Neural Network, or any of the BP models which could be implemented having objective to predict the financial distress or more complex financial failure-bankruptcy of the banks or companies). Units of analysis are financial ratios derived from raw financial data: B/S, P&L statements (income statements) and cash flow statements of both failed and non-failed companies/corporates that have been collected from the auditing resources and reports performed. A number of these studies examined whether a cash flow improve the prediction of business failure. The authors would have the objective to show the evidence and usefulness and efficacy of statistical models such as Altman Z-score discriminant analysis bankruptcy predictive models to assess client on going concern status. Failed and non-failed companies were selected for analysis to determine whether the cash flow improves the business failure prediction aiming to proof that the cash flow certainly makes better financial distress and bankruptcy prediction possible. Key-Words: bankruptcy prediction, financial distress, financial crisis, transition economy, auditing statement, balance sheet, profit and loss accounts, income statements",
"title": ""
},
{
"docid": "e373e44d5d4445ca56a45b4800b93740",
"text": "In recent years a great deal of research efforts in ship hydromechanics have been devoted to practical navigation problems in moving larger ships safely into existing harbours and inland waterways and to ease congestion in existing shipping routes. The starting point of any navigational or design analysis lies in the accurate determination of the hydrodynamic forces generated on the ship hull moving in confined waters. The analysis of such ship motion should include the effects of shallow water. An area of particular interest is the determination of ship resistance in shallow or restricted waters at different speeds, forming the basis for the power calculation and design of the propulsion system. The present work describes the implementation of CFD techniques for determining the shallow water resistance of a river-sea ship at different speeds. The ship hull flow is analysed for different ship speeds in shallow water conditions. The results obtained from CFD analysis are compared with available standard results.",
"title": ""
},
{
"docid": "7cb61609adf6e3c56c762d6fe322903c",
"text": "In this paper, we give an overview of the BitBlaze project, a new approach to computer security via binary analysis. In particular, BitBlaze focuses on building a unified binary analysis platform and using it to provide novel solutions to a broad spectrum of different security problems. The binary analysis platform is designed to enable accurate analysis, provide an extensible architecture, and combines static and dynamic analysis as well as program verification techniques to satisfy the common needs of security applications. By extracting security-related properties from binary programs directly, BitBlaze enables a principled, root-cause based approach to computer security, offering novel and effective solutions, as demonstrated with over a dozen different security applications.",
"title": ""
},
{
"docid": "185ae8a2c89584385a810071c6003c15",
"text": "In this paper, we propose a free viewpoint image rendering method combined with filter based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using guided filter. In addition, we extend view synthesis method to deal the alpha channel. Experiment results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.",
"title": ""
},
{
"docid": "13e8fd8e8462e4bbb267f909403f9872",
"text": "Ergative case, the special case of transitive subjects, rai ses questions not only for the theory of case but also for theories of subjectho od and transitivity. This paper analyzes the case system of Nez Perce, a ”three-way erg tiv ” language, with an eye towards a formalization of the category of transitive subject . I show that it is object agreement that is determinative of transitivity, an d hence of ergative case, in Nez Perce. I further show that the transitivity condition on ergative case must be coupled with a criterion of subjecthood that makes reference to participation in subject agreement, not just to origin in a high argument-structural position. These two results suggest a formalization of the transitive subject as that ar gument uniquely accessing both high and low agreement information, the former through its (agreement-derived) connection with T and the latter through its origin in the spe cifi r of a head associated with object agreement (v). In view of these findings, I ar gue that ergative case morphology should be analyzed not as the expression of a synt ctic primitive but as the morphological spell-out of subject agreement and objec t agreement on a nominal.",
"title": ""
},
{
"docid": "25bd1930de4141a4e80441d7a1ae5b89",
"text": "Since the release of Bitcoins as crypto currency, Bitcoin has played a prominent part in the media. However, not Bitcoin but the underlying technology blockchain offers the possibility to innovatively change industries. The decentralized structure of the blockchain is particularly suitable for implementing control and business processes in microgrids, using smart contracts and decentralized applications. This paper provides a state of the art survey overview of current blockchain technology based projects with the potential to revolutionize microgrids and provides a first attempt to technically characterize different start-up approaches. The most promising use case from the microgrid perspective is peer-to-peer trading, where energy is exchanged and traded locally between consumers and prosumers. An application concept for distributed PV generation is provided in this promising area.",
"title": ""
},
{
"docid": "b5f22614e5cd76a66b754fd79299493a",
"text": "We present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time \"twist\": after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of \"big data\". We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production today, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a \"big data\" platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle \"big\" as well as \"fast\" data.",
"title": ""
},
{
"docid": "22445127362a9a2b16521a4a48f24686",
"text": "This work introduces the engineering design of a device capable to detect serum turbidity. We hypothesized that an electronic, portable, and low cost device that can provide objective, quantitative measurements of serum turbidity might have the potential to improve the early detection of neonatal sepsis. The design features, testing methodologies, and the obtained results are described. The final electronic device was evaluated in two experiments. The first one consisted in recording the turbidity value measured by the device for different solutions with known concentrations and different degrees of turbidity. The second analysis demonstrates a positive correlation between visual turbidity estimation and electronic turbidity measurement. Furthermore, our device demonstrated high turbidity in serum from two neonates with sepsis (one with a confirmed positive blood culture; the other one with a clinical diagnosis). We conclude that our electronic device may effectively measure serum turbidity at the bedside. Future studies will widen the possibility of additional clinical implications.",
"title": ""
},
{
"docid": "04a074377c86a19f1d429704ee6ff3f3",
"text": "The nature of wireless network transmission and the emerging attacks are continuously creating or exploiting more vulnerabilities. Despite the fact that the security mechanisms and protocols are constantly upgraded and enhanced, the Small Office/Home Office (SOHO) environments that cannot afford a separate authentication system, and generally adopt the IEEE 802.11 Wi-Fi-Protected-Access-2/Pre-Shared-Key (WPA2-PSK) are still exposed to some attack categories such as de-authentication attacks that aim to push wireless client to re-authenticate to the Access Point (AP) and try to capture the keys exchanged during the handshake to compromise the network security. This kind of attack is impossible to detect or prevent in spite of having an Intrusion Detection and Prevention System (IDPS) installed on the client or on the AP, especially when the attack is not repetitive and is targeting only one client. This paper proposes a novel method which can mitigate and eliminate the risk of exposing the PSK to be captured during the re-authentication process by introducing a novel re-authentication protocol relying on an enhanced four-way handshake which does not require any hardware upgrade or heavy-weight cryptography affecting the network flexibility and performances.",
"title": ""
}
] |
scidocsrr
|
3712cd09117572df13f028a7163e7093
|
Cross-Language Authorship Attribution
|
[
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "b0991cd60b3e94c0ed3afede89e13f36",
"text": "It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.",
"title": ""
}
] |
[
{
"docid": "f5bc721d2b63912307c4ad04fb78dd2c",
"text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament st reotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even",
"title": ""
},
{
"docid": "52c7ac92b5da3b37e3d657afa3e06377",
"text": "Research on implicit cognition and addiction has expanded greatly during the past decade. This research area provides new ways to understand why people engage in behaviors that they know are harmful or counterproductive in the long run. Implicit cognition takes a different view from traditional cognitive approaches to addiction by assuming that behavior is often not a result of a reflective decision that takes into account the pros and cons known by the individual. Instead of a cognitive algebra integrating many cognitions relevant to choice, implicit cognition assumes that the influential cognitions are the ones that are spontaneously activated during critical decision points. This selective review highlights many of the consistent findings supporting predictive effects of implicit cognition on substance use and abuse in adolescents and adults; reveals a recent integration with dual-process models; outlines the rapid evolution of different measurement tools; and introduces new routes for intervention.",
"title": ""
},
{
"docid": "58c0456c8ae9045898aca67de9954659",
"text": "Channel sensing and spectrum allocation has long been of interest as a prospective addition to cognitive radios for wireless communications systems occupying license-free bands. Conventional approaches to cyclic spectral analysis have been proposed as a method for classifying signals for applications where the carrier frequency and bandwidths are unknown, but is, however, computationally complex and requires a significant amount of observation time for adequate performance. Neural networks have been used for signal classification, but only for situations where the baseband signal is present. By combining these techniques a more efficient and reliable classifier can be developed where a significant amount of processing is performed offline, thus reducing online computation. In this paper we take a renewed look at signal classification using spectral coherence and neural networks, the performance of which is characterized by Monte Carlo simulations",
"title": ""
},
{
"docid": "377aec61877995ad2b677160fa43fefb",
"text": "One of the major issues involved with communication is acoustic echo, which is actually a delayed version of sound reflected back to the source of sound hampering communication. Cancellation of these involve the use of acoustic echo cancellers involving adaptive filters governed by adaptive algorithms. This paper presents a review of some of the algorithms of acoustic echo cancellation covering their merits and demerits. Various algorithms like LMS, NLMS, FLMS, LLMS, RLS, AFA, LMF have been discussed. Keywords— Adaptive Filter, Acoustic Echo, LMS, NLMS, FX-LMS, AAF, LLMS, RLS.",
"title": ""
},
{
"docid": "d26091934bbc0192735e056cf150fc31",
"text": "An Approximate Minimum Degree ordering algorithm (AMD) for preordering a symmetric sparse matrix prior to numerical factorization is presented. We use techniques based on the quotient graph for matrix factorization that allow us to obtain computationally cheap bounds for the minimum degree. We show that these bounds are often equal to the actual degree. The resulting algorithm is typically much faster than previous minimum degree ordering algorithms, and produces results that are comparable in quality with the best orderings from other minimum degree algorithms. ENSEEIHT-IRIT, Toulouse, France. email: amestoy@enseeiht.fr. Computer and Information Sciences Department University of Florida, Gainesville, Florida, USA. phone: (904) 392-1481, email: davis@cis.ufl.edu. Technical reports and matrices are available via the World Wide Web at http://www.cis.ufl.edu/̃ davis, or by anonymous ftp at ftp.cis.ufl.edu:cis/tech-reports. Support for this project was provided by the National Science Foundation (ASC-9111263 and DMS-9223088). Portions of this work were supported by a post-doctoral grant from CERFACS. Rutherford Appleton Laboratory, Chilton, Didcot, Oxon. 0X11 0QX England, and European Center for Research and Advanced Training in Scientific Computation (CERFACS), Toulouse, France. email: isd@letterbox.rl.ac.uk. Technical reports, information on the Harwell Subroutine Library, and matrices are available via the World Wide Web at http://www.cis.rl.ac.uk/struct/ARCD/NUM.html, or by anonymous ftp at seamus.cc.rl.ac.uk/pub.",
"title": ""
},
{
"docid": "8e8ba9e3178d6f586f8d551b4ba52851",
"text": "Fake news, one of the biggest new-age problems has the potential to mould opinions and influence decisions. The proliferation of fake news on social media and Internet is deceiving people to an extent which needs to be stopped. The existing systems are inefficient in giving a precise statistical rating for any given news claim. Also, the restrictions on input and category of news make it less varied. This paper proposes a system that classifies unreliable news into different categories after computing an F-score. This system aims to use various NLP and classification techniques to help achieve maximum accuracy.",
"title": ""
},
{
"docid": "89d05b1f40431af3cc6e2a8e71880e6f",
"text": "Many test series have been developed to assess dog temperament and aggressive behavior, but most of them have been criticized for their relatively low predictive validity or being too long, stressful, and/or problematic to carry out. We aimed to develop a short and effective series of tests that corresponds with (a) the dog's bite history, and (b) owner evaluation of the dog's aggressive tendencies. Seventy-three pet dogs were divided into three groups by their biting history; non-biter, bit once, and multiple biter. All dogs were exposed to a short test series modeling five real-life situations: friendly greeting, take away bone, threatening approach, tug-of-war, and roll over. We found strong correlations between the in-test behavior and owner reports of dogs' aggressive tendencies towards strangers; however, the test results did not mirror the reported owner-directed aggressive tendencies. Three test situations (friendly greeting, take-away bone, threatening approach) proved to be effective in evoking specific behavioral differences according to dog biting history. Non-biters differed from biters, and there were also specific differences related to aggression and fear between the two biter groups. When a subsample of dogs was retested, the test revealed consistent results over time. We suggest that our test is adequate for a quick, general assessment of human-directed aggression in dogs, particularly to evaluate their tendency for aggressive behaviors towards strangers. Identifying important behavioral indicators of aggressive tendencies, this test can serve as a useful tool to study the genetic or neural correlates of human-directed aggression in dogs.",
"title": ""
},
{
"docid": "5b148dd9f45a52d2961f348adf39e0ad",
"text": "Research suggesting the beneficial effects of yoga on myriad aspects of psychological health has proliferated in recent years, yet there is currently no overarching framework by which to understand yoga's potential beneficial effects. Here we provide a theoretical framework and systems-based network model of yoga that focuses on integration of top-down and bottom-up forms of self-regulation. We begin by contextualizing yoga in historical and contemporary settings, and then detail how specific components of yoga practice may affect cognitive, emotional, behavioral, and autonomic output under stress through an emphasis on interoception and bottom-up input, resulting in physical and psychological health. The model describes yoga practice as a comprehensive skillset of synergistic process tools that facilitate bidirectional feedback and integration between high- and low-level brain networks, and afferent and re-afferent input from interoceptive processes (somatosensory, viscerosensory, chemosensory). From a predictive coding perspective we propose a shift to perceptual inference for stress modulation and optimal self-regulation. We describe how the processes that sub-serve self-regulation become more automatized and efficient over time and practice, requiring less effort to initiate when necessary and terminate more rapidly when no longer needed. To support our proposed model, we present the available evidence for yoga affecting self-regulatory pathways, integrating existing constructs from behavior theory and cognitive neuroscience with emerging yoga and meditation research. This paper is intended to guide future basic and clinical research, specifically targeting areas of development in the treatment of stress-mediated psychological disorders.",
"title": ""
},
{
"docid": "4f8a52941e24de8ce82ba31cd3250deb",
"text": "BACKGROUND\nThere is an increasing use of technology for teaching and learning in medical education but often the use of educational theory to inform the design is not made explicit. The educational theories, both normative and descriptive, used by medical educators determine how the technology is intended to facilitate learning and may explain why some interventions with technology may be less effective compared with others.\n\n\nAIMS\nThe aim of this study is to highlight the importance of medical educators making explicit the educational theories that inform their design of interventions using technology.\n\n\nMETHOD\nThe use of illustrative examples of the main educational theories to demonstrate the importance of theories informing the design of interventions using technology.\n\n\nRESULTS\nHighlights the use of educational theories for theory-based and realistic evaluations of the use of technology in medical education.\n\n\nCONCLUSION\nAn explicit description of the educational theories used to inform the design of an intervention with technology can provide potentially useful insights into why some interventions with technology are more effective than others. An explicit description is also an important aspect of the scholarship of using technology in medical education.",
"title": ""
},
{
"docid": "5e3cbb89e7ba026d6f60a19aca8be4b8",
"text": "This paper presents for the first time, the design of a dual band PIFA antenna for 5G applications on a low-cost substrate with smallest form factor and widest bandwidth in both bands (28 GHz and 38 GHz). The proposed dual band PIFA antenna consists of a shorted patch and a modified U-shaped slot in the patch. The antenna shows good matching at and around both center frequencies. The antenna shows clean radiation pattern and bandwidth of 3.34 GHz and 1.395 GHz and gain of 3.75 dBi and 5.06 dBi at 28 and 38 GHz respectively. This antenna has ultra-small form factor of 1.3 mm × 1.2 mm. Patch is shorted at one end with a metallic cylindrical via. A CPW line and a feeding via are used on the bottom side of the substrate to excite the PIFA antenna patterned on the top side of the substrate which also facilitate the measurements of the antenna at mm-wave frequencies. The antenna was designed on low cost Isola FR406 substrate.",
"title": ""
},
{
"docid": "9435908ab7c10a858c223d3f08b87e74",
"text": "The recent success of deep neural networks (DNNs) in speech recognition can be attributed largely to their ability to extract a specific form of high-level features from raw acoustic data for subsequent sequence classification or recognition tasks. Among the many possible forms of DNN features, what forms are more useful than others and how effective these DNN features are in connection with the different types of downstream sequence recognizers remained unexplored and are the focus of this paper. We report our recent work on the construction of a diverse set of DNN features, including the vectors extracted from the output layer and from various hidden layers in the DNN. We then apply these features as the inputs to four types of classifiers to carry out the identical sequence classification task of phone recognition. The experimental results show that the features derived from the top hidden layer of the DNN perform the best for all four classifiers, especially for the autoregressive-moving-average (ARMA) version of a recurrent neural network. The feature vector derived from the DNN's output layer performs slightly worse but better than any of the hidden layers in the DNN except the top one.",
"title": ""
},
{
"docid": "417307155547a565d03d3f9c2a235b2e",
"text": "Recent deep learning based methods have achieved the state-of-the-art performance for handwritten Chinese character recognition (HCCR) by learning discriminative representations directly from raw data. Nevertheless, we believe that the long-and-well investigated domain-specific knowledge should still help to boost the performance of HCCR. By integrating the traditional normalization-cooperated direction-decomposed feature map (directMap) with the deep convolutional neural network (convNet), we are able to obtain new highest accuracies for both online and offline HCCR on the ICDAR-2013 competition database. With this new framework, we can eliminate the needs for data augmentation and model ensemble, which are widely used in other systems to achieve their best results. This makes our framework to be efficient and effective for both training and testing. Furthermore, although directMap+convNet can achieve the best results and surpass human-level performance, we show that writer adaptation in this case is still effective. A new adaptation layer is proposed to reduce the mismatch between training and test data on a particular source layer. The adaptation process can be efficiently and effectively implemented in an unsupervised manner. By adding the adaptation layer into the pre-trained convNet, it can adapt to the new handwriting styles of particular writers, and the recognition accuracy can be further improved consistently and significantly. This paper gives an overview and comparison of recent deep learning based approaches for HCCR, and also sets new benchmarks for both online and offline HCCR.",
"title": ""
},
{
"docid": "8f0ed599cec42faa0928a0931ee77b28",
"text": "This paper describes the Connector and Acceptor patterns. The intent of these patterns is to decouple the active and passive connection roles, respectively, from the tasks a communication service performs once connections are established. Common examples of communication services that utilize these patterns include WWW browsers, WWW servers, object request brokers, and “superservers” that provide services like remote login and file transfer to client applications. This paper illustrates how the Connector and Acceptor patterns can help decouple the connection-related processing from the service processing, thereby yielding more reusable, extensible, and efficient communication software. When used in conjunction with related patterns like the Reactor [1], Active Object [2], and Service Configurator [3], the Acceptor and Connector patterns enable the creation of highly extensible and efficient communication software frameworks [4] and applications [5]. This paper is organized as follows: Section 2 outlines background information on networking and communication protocols necessary to appreciate the patterns in this paper; Section 3 motivates the need for the Acceptor and Connector patterns and illustrates how they have been applied to a production application-level Gateway; Section 4 describes the Acceptor and Connector patterns in detail; and Section 5 presents concluding remarks.",
"title": ""
},
{
"docid": "2922158c41eed229f4beeb2ea130c108",
"text": "Automatically generating captions of an image is a fundamental problem in computer vision and natural language processing, which translates the content of the image into natural language with correct grammar and structure. Attention-based model has been widely adopted for captioning tasks. Most attention models generate only single certain attention heat map for indicating eyes where to see. However, these models ignore the endogenous orienting which depends on the interests, goals or desires of the observers, and constrain the diversity of captions. To improve both the accuracy and diversity of the generated sentences, we present a novel endogenous–exogenous attention architecture to capture both the endogenous attention, which indicates stochastic visual orienting, and the exogenous attention, which indicates deterministic visual orienting. At each time step, our model generates two attention maps, endogenous heat map and exogenous heat map, and then fuses them into hidden state of LSTM for sequential word generation. We evaluate our model on the Flickr30k and MSCOCO datasets, and experiments show the accuracy of the model and the diversity of captions it learns. Our model achieves better performance over state-of-the-art methods.",
"title": ""
},
{
"docid": "e4c27a97a355543cf113a16bcd28ca50",
"text": "A metamaterial-based broadband low-profile grid-slotted patch antenna is presented. By slotting the radiating patch, a periodic array of series capacitor loaded metamaterial patch cells is formed, and excited through the coupling aperture in a ground plane right underneath and parallel to the slot at the center of the patch. By exciting two adjacent resonant modes simultaneously, broadband impedance matching and consistent radiation are achieved. The dispersion relation of the capacitor-loaded patch cell is applied in the mode analysis. The proposed grid-slotted patch antenna with a low profile of 0.06 λ0 (λ0 is the center operating wavelength in free space) achieves a measured bandwidth of 28% for the |S11| less than -10 dB and maximum gain of 9.8 dBi.",
"title": ""
},
{
"docid": "44750e99b005ccf18b221576fa7304e7",
"text": "Due to the diversity of natural language processing (NLP) tools and resources, combining them into processing pipelines is an important issue, and sharing these pipelines with others remains a problem. We present DKPro Core, a broad-coverage component collection integrating a wide range of third-party NLP tools and making them interoperable. Contrary to other recent endeavors that rely heavily on web services, our collection consists only of portable components distributed via a repository, making it particularly interesting with respect to sharing pipelines with other researchers, embedding NLP pipelines in applications, and the use on high-performance computing clusters. Our collection is augmented by a novel concept for automatically selecting and acquiring resources required by the components at runtime from a repository. Based on these contributions, we demonstrate a way to describe a pipeline such that all required software and resources can be automatically obtained, making it easy to share it with others, e.g. in order to reproduce results or as examples in teaching, documentation, or publications.",
"title": ""
},
{
"docid": "2b595cab271cac15ea165e46459d6923",
"text": "Autonomous Mobility On Demand (MOD) systems can utilize fleet management strategies in order to provide a high customer quality of service (QoS). Previous works on autonomous MOD systems have developed methods for rebalancing single capacity vehicles, where QoS is maintained through large fleet sizing. This work focuses on MOD systems utilizing a small number of vehicles, such as those found on a campus, where additional vehicles cannot be introduced as demand for rides increases. A predictive positioning method is presented for improving customer QoS by identifying key locations to position the fleet in order to minimize expected customer wait time. Ridesharing is introduced as a means for improving customer QoS as arrival rates increase. However, with ridesharing perceived QoS is dependent on an often unknown customer preference. To address this challenge, a customer ratings model, which learns customer preference from a 5-star rating, is developed and incorporated directly into a ridesharing algorithm. The predictive positioning and ridesharing methods are applied to simulation of a real-world campus MOD system. A combined predictive positioning and ridesharing approach is shown to reduce customer service times by up to 29%. and the customer ratings model is shown to provide the best overall MOD fleet management performance over a range of customer preferences.",
"title": ""
},
{
"docid": "4df6bbfaa8842d88df0b916946c59ea3",
"text": "Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. \n We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.",
"title": ""
},
{
"docid": "5a85db36e049c371f0b0e689e7e73d4a",
"text": "Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm.While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore the systems-level challenges in achieving scalable, fault-tolerant quantum computation. In this lecture,we provide an engineering-oriented introduction to quantum computation with an overview of the theory behind key quantum algorithms. Next, we look at architectural case studies based upon experimental data and future projections for quantum computation implemented using trapped ions. While we focus here on architectures targeted for realization using trapped ions, the techniques for quantum computer architecture design, quantum fault-tolerance, and compilation described in this lecture are applicable to many other physical technologies that may be viable candidates for building a large-scale quantum computing system. We also discuss general issues involved with programming a quantum computer as well as a discussion of work on quantum architectures based on quantum teleportation. Finally, we consider some of the open issues remaining in the design of quantum computers.",
"title": ""
},
{
"docid": "66154317ab348562536ab44fa94d2520",
"text": "We describe a prototype dialogue response generation model for the customer service domain at Amazon. The model, which is trained in a weakly supervised fashion, measures the similarity between customer questions and agent answers using a dual encoder network, a Siamese-like neural network architecture. Answer templates are extracted from embeddings derived from past agent answers, without turn-by-turn annotations. Responses to customer inquiries are generated by selecting the best template from the final set of templates. We show that, in a closed domain like customer service, the selected templates cover >70% of past customer inquiries. Furthermore, the relevance of the model-selected templates is significantly higher than templates selected by a standard tf-idf baseline.",
"title": ""
}
] |
scidocsrr
|
4864cdc1637fc456cf8df6c033a6b441
|
What Enterprise Architecture Can Bring for Digital Transformation: An Exploratory Study
|
[
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "fad4ff82e9b11f28a70749d04dfbf8ca",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. Enterprise architecture (EA) is the definition and representation of a high-level view of an enterprise's business processes and IT systems, their interrelationships, and the extent to which these processes and systems are shared by different parts of the enterprise. EA aims to define a suitable operating platform to support an organisation's future goals and the roadmap for moving towards this vision. Despite significant practitioner interest in the domain, understanding the value of EA remains a challenge. Although many studies make EA benefit claims, the explanations of why and how EA leads to these benefits are fragmented, incomplete, and not grounded in theory. This article aims to address this knowledge gap by focusing on the question: How does EA lead to organisational benefits? Through a careful review of EA literature, the paper consolidates the fragmented knowledge on EA benefits and presents the EA Benefits Model (EABM). The EABM proposes that EA leads to organisational benefits through its impact on four benefit enablers: Organisational Alignment, Information Availability, Resource Portfolio Optimisation, and Resource Complementarity. The article concludes with a discussion of a number of potential avenues for future research, which could build on the findings of this study.",
"title": ""
}
] |
[
{
"docid": "fbde8c336fe5d707d247faa51bb8c76c",
"text": "The paper approaches the problem of imageto-text with attention-based encoder-decoder networks that are trained to handle sequences of characters rather than words. We experiment on lines of text from a popular handwriting database with different attention mechanisms for the decoder. The model trained with softmax attention achieves the lowest test error, outperforming several other RNN-based models. Our results show that softmax attention is able to learn a linear alignment whereas the alignment generated by sigmoid attention is linear but much less precise.",
"title": ""
},
{
"docid": "c35f18d3cf5397962feed78f2ba00b28",
"text": "There exists significant variation between sign language recognition processes across the world, although there are many similarities. Pre-processing, feature extraction and classification are the three major steps involved in the sign language recognition process. An analysis of scientific literature indicates the potential of various methods in achieving significantly high accuracy in image recognition. Further examination of the literature indicates the voluminous works carried out in American Sign Language recognition systems and most of these works compare the potential of various methods and combination of methods for their accuracy. Although, the comparison using randomly selected gestures for their potential would result in realistic overall accuracy for ASL where the gestures are simple and distinct, the complete adoption of such methods for Indian Sign Language (ISL) recognition may not be ideal due to the complexity in ISL. Other than static gestures, the dynamic gestures, gestures including facial expression, similarity in gestures, all increase the complexity of ISL. Therefore, the potential of different methods and their combinations need to evaluate in the context of ISL. A preliminary study to analyse the potential of promising feature extraction methods indicated that the methods could vary significantly while handling gestures with resemblances. This clearly indicates the necessity to screen gesture recognition methods for their accuracy in handling gestures in the context of complex ISL. Keywords— Indian Sign Language, Gesture Recognition, Preprocessing, Feature Extraction, Classification.",
"title": ""
},
{
"docid": "25c412af8e072bf592ebfa1aa0168aa1",
"text": "One of the most promising strategies to improve the bioavailability of active pharmaceutical ingredients is based on the association of the drug with colloidal carriers, for example, polymeric nanoparticles, which are stable in biological environment, protective for encapsulated substances and able to modulate physicochemical characteristics, drug release and biological behaviour. The synthetic polymers possess unique properties due to their chemical structure. Some of them are characterized with mucoadhesiveness; another can facilitate the penetration through mucous layers; or to be stimuli responsive, providing controlled drug release at the target organ, tissues or cells; and all of them are biocompatible and versatile. These are suitable vehicles of nucleic acids, oligonucleotides, DNA, peptides and proteins. This chapter aims to look at the ‘hot spots’ in the design of synthetic polymer nanoparticles as an intelligent drug delivery system in terms of biopharmaceutical challenges and in relation to the route of their administration: the non-invasive—oral, transdermal, transmucosal (nasal, buccal/sublingual, vaginal, rectal and ocular) and inhalation routes—and the invasive parenteral route.",
"title": ""
},
{
"docid": "a8fabde6ef54212ea0a8d47727ecd388",
"text": "An alternative circuit analysis technique is used to study networks with nonsinusoidal sources and linear loads. In contrast to the technique developed by Steinmetz, this method is supported by geometric algebra instead of the algebra of complex numbers, uses multivectors in place of phasors and is performed in the GN domain instead of the frequency domain. The advantages of this method over the present technique involve: determining the flow of current and power quantities in the circuit, validating the results using the principle of conservation of energy, discerning and revealing other forms of reactive power generation, and the ability to design compensators with great flexibility. The power equation is composed of the active power and the CN -power representing the nonactive power. All the CN-power terms are sorted into reactive power terms due to phase shift, reactive power terms due to harmonic interactions and degrading power terms which determine the new quantity called degrading power. This decomposition shows that estimating these quantities is intricate. It also displays the power equation's functionality for power factor improvement. The geometric addition of power quantities is not pre-established but results from applying the established norm and yields the new quantity called net apparent power.",
"title": ""
},
{
"docid": "5571389dcc25cbcd9c68517934adce1d",
"text": "The polysaccharide-containing extracellular fractions (EFs) of the edible mushroom Pleurotus ostreatus have immunomodulating effects. Being aware of these therapeutic effects of mushroom extracts, we have investigated the synergistic relations between these extracts and BIAVAC and BIAROMVAC vaccines. These vaccines target the stimulation of the immune system in commercial poultry, which are extremely vulnerable in the first days of their lives. By administrating EF with polysaccharides from P. ostreatus to unvaccinated broilers we have noticed slow stimulation of maternal antibodies against infectious bursal disease (IBD) starting from four weeks post hatching. For the broilers vaccinated with BIAVAC and BIAROMVAC vaccines a low to almost complete lack of IBD maternal antibodies has been recorded. By adding 5% and 15% EF in the water intake, as compared to the reaction of the immune system in the previous experiment, the level of IBD antibodies was increased. This has led us to believe that by using this combination of BIAVAC and BIAROMVAC vaccine and EF from P. ostreatus we can obtain good results in stimulating the production of IBD antibodies in the period of the chicken first days of life, which are critical to broilers' survival. This can be rationalized by the newly proposed reactivity biological activity (ReBiAc) principles by examining the parabolic relationship between EF administration and recorded biological activity.",
"title": ""
},
{
"docid": "0b70a4a44a26ff9218224727fbba823c",
"text": "Recently, DNN model compression based on network architecture design, e.g., SqueezeNet, attracted a lot attention. No accuracy drop on image classification is observed on these extremely compact networks, compared to well-known models. An emerging question, however, is whether these model compression techniques hurt DNNs learning ability other than classifying images on a single dataset. Our preliminary experiment shows that these compression methods could degrade domain adaptation (DA) ability, though the classification performance is preserved. Therefore, we propose a new compact network architecture and unsupervised DA method in this paper. The DNN is built on a new basic module Conv-M which provides more diverse feature extractors without significantly increasing parameters. The unified framework of our DA method will simultaneously learn invariance across domains, reduce divergence of feature representations, and adapt label prediction. Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN obtains GoogLeNet-level accuracy both on classification and DA, and our DA method slightly outperforms previous competitive ones. Put all together, our DA strategy based on our DNN achieves state-of-the-art on sixteen of total eighteen DA tasks on popular Office-31 and Office-Caltech datasets.",
"title": ""
},
{
"docid": "19a1a5d69037f0072f67c785031b0881",
"text": "In recent years, advances in the design of convolutional neural networks have resulted in signicant improvements on the image classication and object detection problems. One of the advances is networks built by stacking complex cells, seen in such networks as InceptionNet and NasNet. ese cells are either constructed by hand, generated by generative networks or discovered by search. Unlike conventional networks (where layers consist of a convolution block, sampling and non linear unit), the new cells feature more complex designs consisting of several lters and other operators connected in series and parallel. Recently, several cells have been proposed or generated that are supersets of previously proposed custom or generated cells. Inuenced by this, we introduce a network construction method based on EnvelopeNets. An EnvelopeNet is a deep convolutional neural network of stacked EnvelopeCells. EnvelopeCells are supersets (or envelopes) of previously proposed handcraed and generated cells. We propose a method to construct improved network architectures by restructuring EnvelopeNets. e algorithm restructures an EnvelopeNet by rearranging blocks in the network. It identies blocks to be restructured using metrics derived from the featuremaps collected during a partial training run of the EnvelopeNet. e method requires less computation resources to generate an architecture than an optimized architecture search over the entire search space of blocks. e restructured networks have higher accuracy on the image classication problem on a representative dataset than both the generating EnvelopeNet and an equivalent arbitrary network.",
"title": ""
},
{
"docid": "06f6ffa9c1c82570b564e1cd0f719950",
"text": "Widespread use of biometric architectures implies the need to secure highly sensitive data to respect the privacy rights of the users. In this paper, we discuss the following question: To what extent can biometric designs be characterized as Privacy Enhancing Technologies? The terms of privacy and security for biometric schemes are defined, while current regulations for the protection of biometric information are presented. Additionally, we analyze and compare cryptographic techniques for secure biometric designs. Finally, we introduce a privacy-preserving approach for biometric authentication in mobile electronic financial applications. Our model utilizes the mechanism of pseudonymous biometric identities for secure user registration and authentication. We discuss how the privacy requirements for the processing of biometric data can be met in our scenario. This work attempts to contribute to the development of privacy-by-design biometric technologies.",
"title": ""
},
{
"docid": "93ae39ed7b4d6b411a2deb9967e2dc7d",
"text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.",
"title": ""
},
{
"docid": "5f2b4caef605ab07ca070552e308d6e6",
"text": "The objective of CLEF is to promote research in the field of multilingual system development. This is done through the organisation of annual evaluation campaigns in which a series of tracks designed to test different aspects of monoand cross-language information retrieval (IR) are offered. The intention is to encourage experimentation with all kinds of multilingual information access – from the development of systems for monolingual retrieval operating on many languages to the implementation of complete multilingual multimedia search services. This has been achieved by offering an increasingly complex and varied set of evaluation tasks over the years. The aim is not only to meet but also to anticipate the emerging needs of the R&D community and to encourage the development of next generation multilingual IR systems. These Working Notes contain descriptions of the experiments conducted within CLEF 2006 – the sixth in a series of annual system evaluation campaigns. The results of the experiments will be presented and discussed in the CLEF 2006 Workshop, 20-22 September, Alicante, Spain. The final papers revised and extended as a result of the discussions at the Workshop together with a comparative analysis of the results will appear in the CLEF 2006 Proceedings, to be published by Springer in their Lecture Notes for Computer Science series. As from CLEF 2005, the Working Notes are published in electronic format only and are distributed to participants at the Workshop on CD-ROM together with the Book of Abstracts in printed form. All reports included in the Working Notes will also be inserted in the DELOS Digital Library, accessible at http://delos-dl.isti.cnr.it. Both Working Notes and Book of Abstracts are divided into eight sections, corresponding to the CLEF 2006 evaluation tracks. In addition appendices are included containing run statistics for the Ad Hoc, Domain-Specific, GeoCLEF and QA tracks, plus a list of all participating groups showing in which track they took part. The main features of the 2006 campaign are briefly outlined here below in order to provide the necessary background to the experiments reported in the rest of the Working Notes.",
"title": ""
},
{
"docid": "e7a6bb8f63e35f3fb0c60bdc26817e03",
"text": "A simple mechanism is presented, based on ant-like agents, for routing and load balancing in telecommunications networks, following the initial works of Appleby and Stewart (1994) and Schoonderwoerd et al. (1997). In the present work, agents are very similar to those proposed by Schoonderwoerd et al. (1997), but are supplemented with a simplified dynamic programming capability, initially experimented by Guérin (1997) with more complex agents, which is shown to significantly improve the network's relaxation and its response to perturbations. Topic area: Intelligent agents and network management",
"title": ""
},
{
"docid": "108a3f06052f615a7ebfc561c3c87cfc",
"text": "There are an estimated 0.5-1 million mite species on earth. Among the many mites that are known to affect humans and animals, only a subset are parasitic but these can cause significant disease. We aim here to provide an overview of the most recent work in this field in order to identify common biological features of these parasites and to inform common strategies for future research. There is a critical need for diagnostic tools to allow for better surveillance and for drugs tailored specifically to the respective parasites. Multi-'omics' approaches represent a logical and timely strategy to identify the appropriate mite molecules. Recent advances in sequencing technology enable us to generate de novo genome sequence data, even from limited DNA resources. Consequently, the field of mite genomics has recently emerged and will now rapidly expand, which is a particular advantage for parasitic mites that cannot be cultured in vitro. Investigations of the microbiota associated with mites will elucidate the link between parasites and pathogens, and define the role of the mite in transmission and pathogenesis. The databases generated will provide the crucial knowledge essential to design novel diagnostic tools, control measures, prophylaxes, drugs and immunotherapies against the mites and associated secondary infections.",
"title": ""
},
{
"docid": "8be957572c846ddda107d8343094401b",
"text": "Corporate accounting statements provide financial markets, and tax services with valuable data on the economic health of companies, although financial indices are only focused on a very limited part of the activity within the company. Useful tools in the field of processing extended financial and accounting data are the methods of Artificial Intelligence, aiming the efficient delivery of financial information to tax services, investors, and financial markets where lucrative portfolios can be created. Key-words: Financial Indices, Artificial Intelligence, Data Mining, Neural Networks, Genetic Algorithms",
"title": ""
},
{
"docid": "c77b2092daceab26611e427facd8e6fb",
"text": "Transactional Memory (TM) is on its way to becoming the programming API of choice for writing correct, concurrent, and scalable programs. Hardware TM (HTM) implementations are expected to be significantly faster than pure software TM (STM); however, full hardware support for true closed and open nested transactions is unlikely to be practical.\n This paper presents a novel mechanism, the split hardware transaction (SpHT), that uses minimal software support to combine multiple segments of an atomic block, each executed using a separate hardware transaction, into one atomic operation. The idea of segmenting transactions can be used for many purposes, including nesting, local retry, orElse, and user-level thread scheduling; in this paper we focus on how it allows linear closed and open nesting of transactions. SpHT overcomes the limited expressive power of best-effort HTM while imposing overheads dramatically lower than STM and preserving useful guarantees such as strong atomicity provided by the underlying HTM.",
"title": ""
},
{
"docid": "9110970e05ed5f5365d613f6f8f2c8ba",
"text": "Abstrak –The objective of this paper is a new MeanMedian filtering for denoising extremely corrupted images by impulsive noise. Whenever an image is converted from one form to another, some of degradation occurs at the output. Improvement in the quality of these degraded images can be achieved by the application of Restoration and /or Enhancement techniques. Noise removing is one of the categories of Enhancement. Removing noise from the original signal is still a challenging problem. Mean filtering fails to effectively remove heavy tailed noise & performance poorly in the presence of signal dependent noise. The successes of median filters are edge preservation and efficient attenuation of impulsive noise. An important shortcoming of the median filter is that the output is one of the samples in the input window. Based on this mixture distributions are proposed to effectively remove impulsive noise characteristics. Finally, the results of comparative analysis of mean-median algorithm with mean, median filters for impulsive noise removal show a high efficiency of this approach relatively to other ones.",
"title": ""
},
{
"docid": "1839d9e6ef4bad29381105f0a604b731",
"text": "Our focus is on the effects that dated ideas about the nature of science (NOS) have on curriculum, instruction and assessments. First we examine historical developments in teaching about NOS, beginning with the seminal ideas of James Conant. Next we provide an overview of recent developments in philosophy and cognitive sciences that have shifted NOS characterizations away from general heuristic principles toward cognitive and social elements. Next, we analyze two alternative views regarding ‘explicitly teaching’ NOS in pre-college programs. Version 1 is grounded in teachers presenting ‘Consensus-based Heuristic Principles’ in science lessons and activities. Version 2 is grounded in learners experience of ‘Building and Refining Model-Based Scientific Practices’ in critique and communication enactments that occur in longer immersion units and learning progressions. We argue that Version 2 is to be preferred over Version 1 because it develops the critical epistemic cognitive and social practices that scientists and science learners use when (1) developing and evaluating scientific evidence, explanations and knowledge and (2) critiquing and communicating scientific ideas and information; thereby promoting science literacy. 1 NOS and Science Education When and how did knowledge about science, as opposed to scientific content knowledge, become a targeted outcome of science education? From a US perspective, the decades of interest are the 1940s and 1950s when two major post-war developments in science education policy initiatives occurred. The first, in post secondary education, was the GI Bill An earlier version of this paper was presented as a plenary session by the first author at the ‘How Science Works—And How to Teach It’ workshop, Aarhus University, 23–25 June, 2011, Denmark. R. A. Duschl (&) The Pennsylvania State University, University Park, PA, USA e-mail: rad19@psu.edu R. Grandy Rice University, Houston, TX, USA 123 Sci & Educ DOI 10.1007/s11191-012-9539-4",
"title": ""
},
{
"docid": "0ef117ca4663f523d791464dad9a7ebf",
"text": "In this paper, a circularly polarized, omnidirectional side-fed bifilar helix antenna, which does not require a ground plane is presented. The antenna has a height of less than 0.1λ and the maximum boresight gain of 1.95dB, with 3dB beamwidth of 93°. The impedance bandwidth of the antenna for VSWR≤2 (with reference to resonant input resistance of 25Ω) is 2.7%. The simulated axial ratio(AR) at the resonant frequency 860MHz is 0.9 ≤AR≤ 1.0 in the whole hemisphere except small region around the nulls. The polarization bandwidth for AR≤3dB is 34.7%. The antenna is especially useful for high speed aerodynamic bodies made of composite materials (such as UAVs) where low profile antennas are essential to reduce air resistance and/or proper metallic ground is not available for monopole-type antenna.",
"title": ""
},
{
"docid": "08cf1e6353fa3c9969188d946874c305",
"text": "In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.",
"title": ""
},
{
"docid": "16316dc13263ca7a45f5ff3682440cc6",
"text": "This paper describes a hybrid system that combines a powered lower limb exoskeleton with functional electrical stimulation (FES) for gait restoration in persons with paraplegia. The general control structure consists of two control loops: a motor control loop, which utilizes joint angle feedback control to control the output of the joint motor to track the desired joint trajectories, and a muscle control loop, which utilizes joint torque profiles from previous steps to shape the muscle stimulation profile for the subsequent step in order to minimize the motor torque contribution required for joint angle trajectory tracking. The implementation described here incorporates stimulation of the hamstrings and quadriceps muscles, such that the hip joints are actuated by the combination of hip motors and the hamstrings, and the knee joints are actuated by the combination of knee motors and the quadriceps. In order to demonstrate efficacy, the control approach was implemented on three paraplegic subjects with motor complete spinal cord injuries ranging from levels T6 to T10. Experimental data indicates that the cooperative control system provided consistent and repeatable gait motions and reduced the torque and power output required from the hip and knee motors of the exoskeleton compared to walking without FES.",
"title": ""
}
] |
scidocsrr
|
b5f96f56c07a9fde786dd82b27bb45cb
|
Solidus: An Incentive-compatible Cryptocurrency Based on Permissionless Byzantine Consensus
|
[
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1",
"text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.",
"title": ""
},
{
"docid": "9db9902c0e9d5fc24714554625a04c7a",
"text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.",
"title": ""
}
] |
[
{
"docid": "02f28b1237b88471b0d96e5ff3871dc4",
"text": "Data mining is becoming increasingly important since the size of databases grows even larger and the need to explore hidden rules from the databases becomes widely recognized. Currently database systems are dominated by relational database and the ability to perform data mining using standard SQL queries will definitely ease implementation of data mining. However the performance of SQL based data mining is known to fall behind specialized implementation and expensive mining tools being on sale. In this paper we present an evaluation of SQL based data mining on commercial RDBMS (IBM DB2 UDB EEE). We examine some techniques to reduce I/O cost by using View and Subquery. Those queries can be more than 6 times faster than SETM SQL query reported previously. In addition, we have made performance evaluation on parallel database environment and compared the performance result with commercial data mining tool (IBM Intelligent Miner). We prove that SQL based data mining can achieve sufficient performance by the utilization of SQL query customization and database tuning.",
"title": ""
},
{
"docid": "185dd20e40c5ed4784ab5e92dd85f639",
"text": "Bayesian methods have become widespread in marketing literature. We review the essence of the Bayesian approach and explain why it is particularly useful for marketing problems. While the appeal of the Bayesian approach has long been noted by researchers, recent developments in computational methods and expanded availability of detailed marketplace data has fueled the growth in application of Bayesian methods in marketing. We emphasize the modularity and flexibility of modern Bayesian approaches. The usefulness of Bayesian methods in situations in which there is limited information about a large number of units or where the information comes from different sources is noted. We include an extensive discussion of open issues and directions for future research. (Bayesian Statistics; Decision Theory; Marketing Models; Critical Review)",
"title": ""
},
{
"docid": "a0b147e6baae3ea7622446da0b8d8e26",
"text": "The Web has come a long way since its invention by Berners-Lee, when it focused essentially on visualization and presentation of content for human consumption (Syntactic Web), to a Web providing meaningful content, facilitating the integration between people and machines (Semantic Web). This paper presents a survey of different tools that provide the enrichment of the Web with understandable annotation, in order to make its content available and interoperable between systems. We can group Semantic Annotation tools into the diverse dimensions: dynamicity, storage, information extraction process, scalability and customization. The analysis of the different annotation tools shows that (semi-)automatic and automatic systems aren't as efficient as needed without human intervention and will continue to evolve to solve the challenge. Microdata, RDFa and the new HTML5 standard will certainly bring new contributions to this issue.",
"title": ""
},
{
"docid": "dcacbed90f45b76e9d40c427e16e89d6",
"text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.",
"title": ""
},
{
"docid": "53e6fe645eb83bcc0f86638ee7ce5578",
"text": "Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference is limited in providing information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs compared to DAGs. To perform evidence integration on our graphs, we investigate two recent graph neural networks, namely graph convolutional network (GCN) and graph recurrent network (GRN). Experiments on two standard datasets show that richer global information leads to better answers. Our method performs better than all published results on these datasets.",
"title": ""
},
{
"docid": "f70bd0a47eac274a1bb3b964f34e0a63",
"text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.",
"title": ""
},
{
"docid": "6610f89ba1776501d6c0d789703deb4e",
"text": "REVIEW QUESTION/OBJECTIVE\nThe objective of this review is to identify the effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospitalized patient care settings.\n\n\nBACKGROUND\nNursing professionals face extraordinary stressors in the medical environment. Many of these stressors have always been inherent to the profession: long work hours, dealing with pain, loss and emotional suffering, caring for dying patients and providing support to families. Recently nurses have been experiencing increased stress related to other factors such as staffing shortages, increasingly complex patients, corporate financial constraints and the increased need for knowledge of ever-changing technology. Stress affects high-level cognitive functions, specifically attention and memory, and this increases the already high stakes for nurses. Nurses are required to cope with very difficult situations that require accurate, timely decisions that affect human lives on a daily basis.Lapses in attention increase the risk of serious consequences such as medication errors, failure to recognize life-threatening signs and symptoms, and other essential patient safety issues. Research has also shown that the stress inherent to health care occupations can lead to depression, reduced job satisfaction, psychological distress and disruptions to personal relationships. These outcomes of stress are factors that create scenarios for risk of patient harm.There are three main effects of stress on nurses: burnout, depression and lateral violence. Burnout has been defined as a syndrome of depersonalization, emotional exhaustion, and a sense of low personal accomplishment, and the occurrence of burnout has been closely linked to perceived stress. Shimizu, Mizoue, Mishima and Nagata state that nurses experience considerable job stress which has been a major factor in the high rates of burnout that has been recorded among nurses. Zangaro and Soeken share this opinion and state that work related stress is largely contributing to the current nursing shortage. They report that work stress leads to a much higher turnover, especially during the first year after graduation, lowering retention rates in general.In a study conducted in Pennsylvania, researchers found that while 43% of the nurses who reported high levels of burnout indicated their intent to leave their current position, only 11% of nurses who were not burned out intended to leave in the following 12 months. In the same study patient-to-nurse ratios were significantly associated with emotional exhaustion and burnout. An increase of one patient per nurse assignment to a hospital's staffing level increased burnout by 23%.Depression can be defined as a mood disorder that causes a persistent feeling of sadness and loss of interest. Wang found that high levels of work stress were associated with higher risk of mood and anxiety disorders. In Canada one out of every 10 nurses have shown depressive symptoms; compared to the average of 5.1% of the nurses' counterparts who do not work in healthcare. High incidences of depression and depressive symptoms were also reported in studies among Chinese nurses (38%) and Taiwanese nurses (27.7%). In the Taiwanese study the occurrence of depression was significantly and positively correlated to job stress experienced by the nurses (p<0.001).In a multivariate logistic regression, Ohler, Kerr and Forbes also found that job stress was significantly correlated to depression in nurses. 
The researchers reported that nurses who experienced a higher degree of job stress were 80% more likely to have suffered a major depressive episode in the previous year. A further finding in this study revealed that 75% of the participants also suffered from at least one chronic disease revealing a strong association between depression and other major health issues. A stressful working environment, such as a hospital, could potentially lead to lateral violence among nurses. Lateral violence is a serious occupational health concern among nurses as evidenced by extensive research and literature available on the topic. The impact of lateral violence has been well studied and documented over the past three decades. Griffin and Clark state that lateral violence is a form of bullying grounded in the theoretical framework of the oppression theory. The bullying behaviors occur among members of an oppressed group as a result of feeling powerless and having a perceived lack of control in their workplace. Griffin identified the ten most common forms of lateral violence among nurses as \"non-verbal innuendo, verbal affront, undermining activities, withholding information, sabotage, infighting, scape-goating, backstabbing, failure to respect privacy, and broken confidences\". Nurse-to-nurse lateral violence leads to negative workplace relationships and disrupts team performance, creating an environment where poor patient outcomes, burnout and high staff turnover rates are prevalent. Work-related stressors have been indicated as a potential cause of lateral violence. According to the Effort Reward Imbalance model (ERI) developed by Siegrist, work stress develops when an imbalance exists between the effort individuals put into their jobs and the rewards they receive in return. The ERI model has been widely used in occupational health settings based on its predictive power for adverse health and well-being outcomes. The model claims that both high efforts with low rewards could lead to negative emotions in the exposed employees. Vegchel, van Jonge, de Bosma & Schaufeli state that, according to the ERI model, occupational rewards mostly consist of money, esteem and job security or career opportunities. A survey conducted by Reineck & Furino indicated that registered nurses had a very high regard for the intrinsic rewards of their profession but that they identified workplace relationships and stress issues as some of the most important contributors to their frustration and exhaustion. Hauge, Skogstad & Einarsen state that work-related stress further increases the potential for lateral violence as it creates a negative environment for both the target and the perpetrator. Mindfulness-based programs have proven to be a promising intervention in reducing stress experienced by nurses. Mindfulness was originally defined by Jon Kabat-Zinn in 1979 as \"paying attention on purpose, in the present moment, and nonjudgmentally, to the unfolding of experience moment to moment\". The Mindfulness Based Stress Reduction (MBSR) program is an educationally based program that focuses on training in the contemplative practice of mindfulness. It is an eight-week program where participants meet weekly for two-and-a-half hours and join a one-day long retreat for six hours. The program incorporates a combination of mindfulness meditation, body awareness and yoga to help increase mindfulness in participants. The practice is meant to facilitate relaxation in the body and calming of the mind by focusing on present-moment awareness.
The program has proven to be effective in reducing stress, improving quality of life and increasing self-compassion in healthcare professionals. Researchers have demonstrated that mindfulness interventions can effectively reduce stress, anxiety and depression in both clinical and non-clinical populations. In a meta-analysis of seven studies conducted with healthy participants from the general public, the reviewers reported a significant reduction in stress when the treatment and control groups were compared. However, there have been limited studies to date that focused specifically on the effectiveness of mindfulness programs to reduce stress experienced by nurses. In addition to stress reduction, mindfulness-based interventions can also enhance nurses' capacity for focused attention and concentration by increasing present moment awareness. Mindfulness techniques can be applied in everyday situations as well as stressful situations. According to Kabat-Zinn, work-related stress influences people differently based on their viewpoint and their interpretation of the situation. He states that individuals need to be able to see the whole picture, have perspective on the connectivity of all things and not operate on automatic pilot to effectively cope with stress. The goal of mindfulness meditation is to empower individuals to respond to situations consciously rather than automatically. Prior to the commencement of this systematic review, the Cochrane Library and JBI Database of Systematic Reviews and Implementation Reports were searched. No previous systematic reviews on the topic of reducing stress experienced by nurses through mindfulness programs were identified. Hence, the objective of this systematic review is to evaluate the best research evidence available pertaining to mindfulness-based programs and their effectiveness in reducing perceived stress among nurses.",
"title": ""
},
{
"docid": "cd3c56e7e13a23e62986d40630f5a207",
"text": "The prediction of cellular function from a genotype is a fundamental goal in biology. For metabolism, constraint-based modelling methods systematize biochemical, genetic and genomic knowledge into a mathematical framework that enables a mechanistic description of metabolic physiology. The use of constraint-based approaches has evolved over ~30 years, and an increasing number of studies have recently combined models with high-throughput data sets for prospective experimentation. These studies have led to validation of increasingly important and relevant biological predictions. As reviewed here, these recent successes have tangible implications in the fields of microbial evolution, interaction networks, genetic engineering and drug discovery.",
"title": ""
},
{
"docid": "e2a605f5c22592bd5ca828d4893984be",
"text": "Deep neural networks are complex and opaque. As they enter application in a variety of important and safety critical domains, users seek methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models on salient concepts contained in a CNN. We develop methods to extract salient concepts throughout a target network by using autoencoders trained to extract humanunderstandable representations of network activations. We then build a bayesian causal model using these extracted concepts as variables in order to explain image classification. Finally, we use this causal model to identify and visualize features with significant causal influence on final classification.",
"title": ""
},
{
"docid": "d880535f198a1f0a26b18572f674b829",
"text": "Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs a divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages a two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during prediction phase to further improve the activity recognition accuracy. While there have been numerous researches exploring the benefits of activity signal denoising for HAR, few researches have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state of the art approaches.",
"title": ""
},
{
"docid": "3e7e40f82ebb83b4314c974334c8ce0c",
"text": "Three-dimensional shape reconstruction of 2D landmark points on a single image is a hallmark of human vision, but is a task that has been proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks are from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; 3D shape reconstruction error (measured as the Procrustes distance between the reconstructed shape and the ground-truth) of human faces is <inline-formula><tex-math notation=\"LaTeX\">$<.004$</tex-math><alternatives> <inline-graphic xlink:href=\"martinez-ieq1-2772922.gif\"/></alternatives></inline-formula>, cars is .0022, human bodies is .022, and highly-deformable flags is .0004. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (done in conjunction with the European Conference on Computer Vision, ECCV) that required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple hours and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with small number of samples. And the system is robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points).",
"title": ""
},
{
"docid": "d5abd8f68a9f77ed84ec1381584357a4",
"text": "In this paper, we study how to test the intelligence of an autonomous vehicle. Comprehensive testing is crucial to both vehicle manufactories and customers. Existing testing approaches can be categorized into two kinds: scenario-based testing and functionality-based testing. We first discuss the shortcomings of these two kinds of approaches, and then propose a new testing framework to combine the benefits of them. Based on the new semantic diagram definition for the intelligence of autonomous vehicles, we explain how to design a task for autonomous vehicle testing and how to evaluate test results. Experiments show that this new approach provides a quantitative way to test the intelligence of an autonomous vehicle.",
"title": ""
},
{
"docid": "1ff9bf5a5a511a159cc1cc3623ad7f0a",
"text": "This paper illustrates the rectifier stress issue of the active clamped dual switch forward converters operating on discontinuous current mode (DCM), and analyzes the additional reverse voltage on the rectifier diode of active clamped dual switch forward converter at DCM operation, which does not appear in continuous current mode (CCM). The additional reverse voltage stress, plus its spikes, definitely causes many difficulties in designing high performance power supplies. In order to suppress this voltage spike to an acceptable level and improve the working conditions for the rectifier diode, this paper carefully explains and presents the working principles of active clamped dual switch forward converter in DCM operation, and theoretically analyzes the causes of the additional reverse voltage and its spikes. For conquering these difficulties, this paper also innovate active clamped snubber (ACS) cell to solve this issue. Furthermore, experiments on a 270W active clamped dual switch forward converter prototype were designed to validate the innovation. Finally, based on the similarities of the rectifier network in forward-topology based converters, this paper also extents the utility of this idea into even wider dc-dc converters.",
"title": ""
},
{
"docid": "267ee2186781941c1f9964afd07a956c",
"text": "Considerations in applying circuit breaker protection to DC systems are capacitive discharge, circuit breaker coordination and impacts of double ground faults. Test and analysis results show the potential for equipment damage. Solutions are proposed at the cost of increased integration between power conversion and protection systems.",
"title": ""
},
{
"docid": "84dee4781f7bc13711317d0594e97294",
"text": "We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from the Arnoldi process for constructing an /2-orthogonal basis of Krylov subspaces. It can be considered as a generalization of Paige and Saunders' MINRES algorithm and is theoretically equivalent to the Generalized Conjugate Residual (GCR) method and to ORTHODIR. The new algorithm presents several advantages over GCR and ORTHODIR.",
"title": ""
},
{
"docid": "f0532446a19fb2fa28a7a01cddca7e37",
"text": "The use of rumble strips on roads can provide drivers lane departure warning (LDW). However, rumble strips require an infrastructure and do not exist on a majority of roadways. Therefore, it is very desirable to have an effective in-vehicle LDW system to detect when the driver is in danger of departing the road and then triggers an alarm to warn the driver early enough to take corrective action. This paper presents the development of an image-based LDW system using the Lucas-Kanade (L-K) optical flow and the Hough transform methods. Our approach integrates both techniques to establish an operation algorithm to determine whether a warning signal should be issued based on the status of the vehicle deviating from its heading lane. The L-K optical flow tracking is used when the lane boundaries cannot be detected, while the lane detection technique is used when they become available. Even though both techniques are used in the system, only one method is activated at any given time because each technique has its own advantages and also disadvantages. The developed LDW system was road tested on several rural highways and also one section of the interstate I35 freeway. Overall, the system operates correctly as expected with a false alarm occurred only roughly about 1.18% of the operation time. This paper presents the system implementation together with our findings. Key-Words: Lane departure warning, Lucas-Kanade optical flow, Hough transform.",
"title": ""
},
{
"docid": "49f35f840566645f5b86e90ce0a932af",
"text": "Over the past decade, a number of tools and systems have been developed to manage various aspects of the software development lifecycle. Until now, tool supported code review, an important aspect of software development, has been largely ignored. With the advent of open source code review tools such as Gerrit along with projects that use them, code review data is now available for collection, analysis, and triangulation with other software development data. In this paper, we extract Android peer review data from Gerrit. We describe the Android peer review process, the reverse engineering of the Gerrit JSON API, our data mining and cleaning methodology, database schema, and provide an example of how the data can be used to answer an empirical software engineering question. The database is available for use by the research community.",
"title": ""
},
{
"docid": "9a4bdfe80a949ec1371a917585518ae4",
"text": "This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with non-deterministic effects, concurrent actions, and continuous change.",
"title": ""
},
{
"docid": "d5eb643385b573706c48cbb2cb3262df",
"text": "This article identifies problems and conditions that contribute to nipple pain during lactation and that may lead to early cessation or noninitiation of breastfeeding. Signs and symptoms of poor latch-on and positioning, oral anomalies, and suckling disorders are reviewed. Diagnosis and treatment of infectious agents that may cause nipple pain are presented. Comfort measures for sore nipples and current treatment recommendations for nipple wound healing are discussed. Suggestions are made for incorporating in-depth breastfeeding content into midwifery education programs.",
"title": ""
},
{
"docid": "55158927c639ed62b53904b97a0f7a97",
"text": "Speech comprehension and production are governed by control processes. We explore their nature and dynamics in bilingual speakers with a focus on speech production. Prior research indicates that individuals increase cognitive control in order to achieve a desired goal. In the adaptive control hypothesis we propose a stronger hypothesis: Language control processes themselves adapt to the recurrent demands placed on them by the interactional context. Adapting a control process means changing a parameter or parameters about the way it works (its neural capacity or efficiency) or the way it works in concert, or in cascade, with other control processes (e.g., its connectedness). We distinguish eight control processes (goal maintenance, conflict monitoring, interference suppression, salient cue detection, selective response inhibition, task disengagement, task engagement, opportunistic planning). We consider the demands on these processes imposed by three interactional contexts (single language, dual language, and dense code-switching). We predict adaptive changes in the neural regions and circuits associated with specific control processes. A dual-language context, for example, is predicted to lead to the adaptation of a circuit mediating a cascade of control processes that circumvents a control dilemma. Effective test of the adaptive control hypothesis requires behavioural and neuroimaging work that assesses language control in a range of tasks within the same individual.",
"title": ""
}
] |
scidocsrr
|
8beaeac8d73368152b265296aeafc462
|
Abaqus Implementation of Extended Finite Element Method Using a Level Set Representation for Three-Dimensional Fatigue Crack Growth and Life Predictions
|
[
{
"docid": "407ef8fa4189f2f5ab7aa39fd5340a3d",
"text": "In this paper, we introduce an implementation of the extended finite element method for fracture problems within the finite element software ABAQUSTM. User subroutine (UEL) in Abaqus is used to enable the incorporation of extended finite element capabilities. We provide details on the data input format together with the proposed user element subroutine, which constitutes the core of the finite element analysis; however, pre-processing tools that are necessary for an X-FEM implementation, but not directly related to Abaqus, are not provided. In addition to problems in linear elastic fracture mechanics, non-linear frictional contact analyses are also realized. Several numerical examples in fracture mechanics are presented to demonstrate the benefits of the proposed implementation.",
"title": ""
}
] |
[
{
"docid": "0090413bf614e3dbeb97cfe0725446bc",
"text": "Imitation learning has proven to be useful for many real-world problems, but approaches such as behavioral cloning suffer from data mismatch and compounding error issues. One attempt to address these limitations is the DAGGER algorithm, which uses the state distribution induced by the novice to sample corrective actions from the expert. Such sampling schemes, however, require the expert to provide action labels without being fully in control of the system. This can decrease safety and, when using humans as experts, is likely to degrade the quality of the collected labels due to perceived actuator lag. In this work, we propose HG-DAGGER, a variant of DAGGER that is more suitable for interactive imitation learning from human experts in real-world systems. In addition to training a novice policy, HG-DAGGER also learns a safety threshold for a model-uncertainty-based risk metric that can be used to predict the performance of the fully trained novice in different regions of the state space. We evaluate our method on both a simulated and real-world autonomous driving task, and demonstrate improved performance over both DAGGER and behavioral cloning.",
"title": ""
},
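The HG-DAGGER passage above builds on the standard DAGGER loop (clone the expert, roll the novice out, have the expert relabel visited states, aggregate, retrain). Below is a minimal, hypothetical sketch of that vanilla loop only, not the HG-DAGGER gating logic or any published implementation; the `expert`/`novice` interfaces and the gym-like environment API are assumptions.

```python
# Minimal DAgger-style imitation learning loop (illustrative sketch only).
# Assumes hypothetical expert.act(state), novice.fit(states, actions),
# novice.act(state) and a gym-like env with reset()/step().

def dagger(env, expert, novice, n_iters=10, horizon=200):
    states, actions = [], []

    # Iteration 0: behavioral cloning on expert-driven rollouts.
    s = env.reset()
    for _ in range(horizon):
        a = expert.act(s)
        states.append(s); actions.append(a)
        s, _, done, _ = env.step(a)
        if done:
            s = env.reset()
    novice.fit(states, actions)

    # Later iterations: the novice drives, the expert relabels visited states.
    for _ in range(n_iters):
        s = env.reset()
        for _ in range(horizon):
            a_novice = novice.act(s)      # novice controls the system
            a_expert = expert.act(s)      # expert provides the corrective label
            states.append(s); actions.append(a_expert)
            s, _, done, _ = env.step(a_novice)
            if done:
                s = env.reset()
        novice.fit(states, actions)       # retrain on the aggregated dataset
    return novice
```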
{
"docid": "306a4933cb90ff914aedc93a09192be9",
"text": "Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.",
"title": ""
},
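As a concrete illustration of the core mechanism described above, namely TD learning of a successor representation with values obtained by combining it with a learned reward vector, here is a minimal tabular sketch. The learning rates, environment interface, and variable names are assumptions, not the authors' code.

```python
import numpy as np

# Tabular successor representation (SR) learned by temporal-difference updates.
# V(s) = M[s] @ w, where M[s, s'] estimates expected discounted future
# occupancy of s' starting from s, and w estimates one-step rewards.

def sr_td_episode(env, M, w, gamma=0.95, alpha=0.1, alpha_w=0.1, horizon=100):
    n = M.shape[0]
    s = env.reset()                        # assumed gym-like discrete env
    for _ in range(horizon):
        a = env.action_space.sample()      # any fixed behavior policy
        s_next, r, done, _ = env.step(a)

        # TD update of the successor matrix row for state s.
        onehot = np.eye(n)[s]
        td_target = onehot + gamma * M[s_next]
        M[s] += alpha * (td_target - M[s])

        # Learn the one-step reward vector.
        w[s_next] += alpha_w * (r - w[s_next])

        s = s_next
        if done:
            break
    V = M @ w                              # state values from SR plus rewards
    return M, w, V
```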
{
"docid": "d35cac8677052d0371d2863d54a59597",
"text": "A high-power short-pulse generator based on the diode step recovery phenomenon and high repetition rate discharges in a two-electrode gas discharge tube is presented. The proposed circuit is simple and low cost and driven by a low-power source. A full analysis of this generator is presented which, considering the nonlinear behavior of the gas tube, predicts the waveform of the output pulse. The proposed method has been shown to work properly by implementation of a kW-range prototype. Experimental measurements of the output pulse characteristics showed a rise time of 3.5 ns, with pulse repetition rate of 2.3 kHz for a 47- $\\Omega $ load. The input peak power was 2.4 W, which translated to about 0.65-kW output, showing more than 270 times increase in the pulse peak power. The efficiency of the prototype was 57%. The overall price of the employed components in the prototype was less than U.S. $2.0. An excellent agreement between the analytical and experimental test results was established. The analysis predicts that the proposed circuit can generate nanosecond pulses with more than 100-kW peak powers by using a subkW power supply.",
"title": ""
},
{
"docid": "55772e55adb83d4fd383ddebcf564a71",
"text": "The generation of multi-functional drug delivery systems, namely solid dosage forms loaded with nano-sized carriers, remains little explored and is still a challenge for formulators. For the first time, the coupling of two important technologies, 3D printing and nanotechnology, to produce innovative solid dosage forms containing drug-loaded nanocapsules was evaluated here. Drug delivery devices were prepared by fused deposition modelling (FDM) from poly(ε-caprolactone) (PCL) and Eudragit® RL100 (ERL) filaments with or without a channelling agent (mannitol). They were soaked in deflazacort-loaded nanocapsules (particle size: 138nm) to produce 3D printed tablets (printlets) loaded with them, as observed by SEM. Drug loading was improved by the presence of the channelling agent and a linear correlation was obtained between the soaking time and the drug loading (r2=0.9739). Moreover, drug release profiles were dependent on the polymeric material of tablets and the presence of the channelling agent. In particular, tablets prepared with a partially hollow core (50% infill) had a higher drug loading (0.27% w/w) and faster drug release rate. This study represents an original approach to convert nanocapsules suspensions into solid dosage forms as well as an efficient 3D printing method to produce novel drug delivery systems, as personalised nanomedicines.",
"title": ""
},
{
"docid": "2687cb8fc5cde18e53c580a50b33e328",
"text": "Social network sites (SNSs) are becoming an increasingly popular resource for both students and adults, who use them to connect with and maintain relationships with a variety of ties. For many, the primary function of these sites is to consume and distribute personal content about the self. Privacy concerns around sharing information in a public or semi-public space are amplified by SNSs’ structural characteristics, which may obfuscate the true audience of these disclosures due to their technical properties (e.g., persistence, searchability) and dynamics of use (e.g., invisible audiences, context collapse) (boyd, 2008b). Early work on the topic focused on the privacy pitfalls of Facebook and other SNSs (e.g., Acquisti & Gross, 2006; Barnes, 2006; Gross & Acquisti, 2005) and argued that individuals were (perhaps inadvertently) disclosing information that might be inappropriate for some audiences, such as future employers, or that might enable identity theft or other negative outcomes.",
"title": ""
},
{
"docid": "e4183c85a9f6771fa06316b002e13188",
"text": "This paper provides an analysis of some argumentation in a biomedical genetics research article as a step towards developing a corpus of articles annotated to support research on argumentation. We present a specification of several argumentation schemes and inter-argument relationships to be annotated.",
"title": ""
},
{
"docid": "add30dc8d14a26eba48dbe5baaaf4169",
"text": "The authors investigated whether intensive musical experience leads to enhancements in executive processing, as has been shown for bilingualism. Young adults who were bilinguals, musical performers (instrumentalists or vocalists), or neither completed 3 cognitive measures and 2 executive function tasks based on conflict. Both executive function tasks included control conditions that assessed performance in the absence of conflict. All participants performed equivalently for the cognitive measures and the control conditions of the executive function tasks, but performance diverged in the conflict conditions. In a version of the Simon task involving spatial conflict between a target cue and its position, bilinguals and musicians outperformed monolinguals, replicating earlier research with bilinguals. In a version of the Stroop task involving auditory and linguistic conflict between a word and its pitch, the musicians performed better than the other participants. Instrumentalists and vocalists did not differ on any measure. Results demonstrate that extended musical experience enhances executive control on a nonverbal spatial task, as previously shown for bilingualism, but also enhances control in a more specialized auditory task, although the effect of bilingualism did not extend to that domain.",
"title": ""
},
{
"docid": "00bf4f81944c1e98e58b891ace95797e",
"text": "Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the l1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.",
"title": ""
},
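The passage above obtains the convex envelope of a nondecreasing submodular set-function from its Lovász extension. The greedy evaluation below is the standard textbook computation, shown only as an illustrative sketch; the example set-function is an assumption, not one from the paper.

```python
import numpy as np

def lovasz_extension(F, x):
    """Evaluate the Lovasz extension of a set-function F at x.

    F maps a frozenset of indices to a float, with F(empty set) = 0.
    Greedy formula: sort coordinates in decreasing order and charge each
    coordinate the marginal gain of adding its index to the growing set.
    """
    order = np.argsort(-np.asarray(x))
    value, prefix, prev = 0.0, set(), 0.0
    for k in order:
        prefix.add(int(k))
        gain = F(frozenset(prefix)) - prev   # marginal gain of adding k
        prev += gain
        value += x[k] * gain
    return value

# Example: F(S) = sqrt(|S|) is nondecreasing and submodular; its Lovasz
# extension yields a norm-like polyhedral penalty of the kind discussed above.
F = lambda S: np.sqrt(len(S))
print(lovasz_extension(F, np.array([0.5, 0.2, 0.9])))
```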
{
"docid": "ef6678881f503c1cec330ddde3e30929",
"text": "Complex queries over high speed data streams often need to rely on approximations to keep up with their input. The research community has developed a rich literature on approximate streaming algorithms for this application. Many of these algorithms produce samples of the input stream, providing better properties than conventional random sampling. In this paper, we abstract the stream sampling process and design a new stream sample operator. We show how it can be used to implement a wide variety of algorithms that perform sampling and sampling-based aggregations. Also, we show how to implement the operator in Gigascope - a high speed stream database specialized for IP network monitoring applications. As an example study, we apply the operator within such an enhanced Gigascope to perform subset-sum sampling which is of great interest for IP network management. We evaluate this implemention on a live, high speed internet traffic data stream and find that (a) the operator is a flexible, versatile addition to Gigascope suitable for tuning and algorithm engineering, and (b) the operator imposes only a small evaluation overhead. This is the first operational implementation we know of, for a wide variety of stream sampling algorithms at line speed within a data stream management system.",
"title": ""
},
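The sample operator described above abstracts over many stream-sampling schemes. As a neutral illustration of the kind of algorithm such an operator can host (not Gigascope's actual implementation), here is classic reservoir sampling over a stream of tuples; the tuple fields are made up.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)        # inclusive; item is kept with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Usage: sample 1000 tuples from a (simulated) high-speed feed.
packets = ({"src": s, "bytes": b} for s, b in zip(range(10_000), range(10_000)))
sample = reservoir_sample(packets, k=1000, seed=42)
```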
{
"docid": "c70317d0dc7850ce1e2a45331aae2988",
"text": "Archaeologists have long claimed the Indus Valley as one of the four literate centers of the early ancient world, complete with long texts written on perishable materials. We demonstrate the impossibility of the lost-manuscript thesis and show that Indus symbols were not even evolving in linguistic directions after at least 600 years of use. Suggestions as to how Indus symbols were used are noted in nonlinguistic symbol systems in the Near East that served key religious, political, and social functions without encoding speech or serving as formal memory aids. Evidence is reviewed that the Harappans’ lack of a true script may have been tied to the role played by their symbols in controlling large multilinguistic populations; parallels are drawn to the later resistance of Brahmin elites to the literate encoding of Vedic sources and to similar phenomena in esoteric traditions outside South Asia. Discussion is provided on some of the political and academic forces that helped sustain the Indusscript myth for over 130 years and on ways in which our findings transform current views of the Indus Valley and of literacy in the ancient world in general. Background of the Indus-script thesis Ever since the first Harappan seal was discovered in 1872-3, it has been nearly universally assumed that Indus inscriptions were tightly bound to language, the grounds of every major decipherment effort (Possehl 1996) and a requirement of writing according to most linguists who specialize in scripts (DeFrancis 1989; Daniels and Bright 1996; Sproat 2000). Extensive efforts have been spent over the past 130 years in attempts to identify the supposed language (or languages) underlying the inscriptions, which are often said to hold the key to understanding India’s earliest civilization (fl. c. 2600 1900 BCE). A partial list of the scripts or languages that have been tied to the inscriptions include Brahmi (ancestor of most modern South Asian scripts), the Chinese Lolo (or Yi) script, Sumerian, Egyptian, proto-Elamite, Altaic, Hittite, protoDravidian, early Indo-Aryan (or even Vedic Sanskrit), proto-Munda, Old Slavic, Easter Island rongorongo, or some lost language or putative Indus lingua franca. Starting in 1877, over a hundred claimed decipherments have made it to print; thorough debunkings of past efforts have 1 Contact information: Steve Farmer, Ph.D., Portola Valley, California, saf@safarmer.com; Richard Sproat, Departments of Linguistics and Electrical and Computer Engineering, the University of Illinois and the Beckman Institute, rws@uiuc.edu; Michael Witzel, Department of Sanskrit and Indian Studies, Harvard University, witzel@fas.harvard.edu. 2 We can leave aside here loose definitions of ‘scripts’ (e.g., Boone and Mignolo 1994: esp. 13 ff.) that include mnemonic systems like Mexican-style ‘picture writing’, Incan khipu, or Iroquois wampum, or early accounting scripts that were not tightly coupled to oral language (Damerow 1999). As noted below, the Indus system cannot be categorized as a ‘script’ even under such broad definitions of the term, since the brevity of the inscriptions alone suggests that they were no more capable of performing extensive mnemonic or accounting functions than of systematically encoding speech. On the multiple uses of the symbols, beyond the comments at the end of this paper, see the extended analysis in Farmer and Weber (forthcoming). FARMER, SPROAT, AND WITZEL 20 not kept new ones from taking their place (Possehl 1996; Witzel and Farmer 2000). 
Speculation regarding ‘lost’ Indus manuscripts began in the 1920s, when Sir John Marshall and his colleagues created a global sensation by comparing Indus civilization to the high-literate societies of Egypt, Mesopotamia, and Elam (cf. Marshall 1924, 1931; Sayce 1924; Gadd and Smith 1924; Hunter 1929). The view that the Indus Valley was home to a literate civilization has been taken for granted ever since by nearly all historians, linguists, and Indus archaeologists (e.g., Kenoyer 1998; Possehl 2002a). Occasional skepticism on this point is not noted even in passing in book-length critiques of past decipherment efforts (Possehl 1996) or standard reviews of deciphered or undeciphered scripts (Daniels and Bright 1996; Pope 1999; Robinson 2002). So far as most researchers are concerned, the image of a literate Indus Valley is an incontrovertible historical fact. If that image were true, it should be noted, given the vast extent of its archaeological ruins, the Indus civilization would have qualified as the largest literate society in the early ancient world — underlining the importance of the Indus-script story not only for ancient Indian history, but for human history as a whole. Dravidian and Indo-Aryan models International expectations that a scientific decipherment was at hand reached their heights in the late 1960s, when a high-profile Soviet research team led by Yuri Knorozov, whose early work led to the later decipherment of Mayan, and a team of Finnish linguists and computer scientists led by the Indologist Asko Parpola, independently claimed that computer analyses of Indus sign positions had “proven” that the inscriptions encoded some early form of Dravidian (Knorozov 1965, 1968; Parpola, Koskenniemi, Parpola, and Aalto 1969), ancestor of over two dozen languages whose modern use is mainly restricted to central and southern India. The early Finnish announcements, which were much bolder than those of the Soviets, were accompanied by sample decipherments and claims that the “secret of the Indus script” or Indus “code” had been broken (Parpola, Koskenniemi, Parpola, and Aalto 1969: 50; Parpola 1970: 91). The appeal of this solution to Dravidian nationalists, the novelty in the 60s of computer linguistics, and fresh memories of the role played by sign positions in deciphering Linear B made the Dravidian thesis the dominant model of the inscriptions for the next three decades. It is easy in retrospect to spot the flaws in those claims: statistical regularities in sign positions show up in nearly all symbol systems, not just those that encode speech; moreover, third-millennium scripts typically omitted so much phonetic, grammatical, and semantic data, and used the same signs in so many varied (or ‘polyvalent’) ways, that even when we are certain that a body of signs encoded speech, it is impossible to identify the underlying language solely from such positional data. Conversely, by exploiting the many degrees of freedom in the ways that speech maps to scripts, it is possible by inventing enough rules as you go to generate half-convincing pseudo-decipherments of any set of ancient signs into any language — even when those signs did not encode language in the first place. The absurdity of this method 3 Even John Chadwick, Michael Ventris’ collaborator in deciphering Linear B, was briefly convinced by the Finnish announcements, whose effects on later Indus studies cannot be overemphasized; see Clauson and Chadwick (1969, 1970). 
Ironically, Walter Fairservis, who at the time came close to being the first major researcher to abandon linguistic views of the inscriptions (see Fairservis 1971: 282), was apparently converted by those announcements, and in the last 20 years of his life became one of the most extreme of would-be decipherers. Cf. Fairservis 1971: 282; 1987; 1992 and the summary discussion at http://www.safarmer.com/indus/fairservis.html. THE COLLAPSE OF THE INDUS-SCRIPT THESIS 21 only becomes obvious when it is extended to large bodies of inscriptions, and the number of required rules reaches astronomical levels; hence the tendency of claimed decipherments to provide only ‘samples’ of their results, prudently restricting the number of rules to outwardly plausible levels. The subtleties of the speech-to-text mapping problem are illustrated by the long line of world-famous linguists and archaeologists, from Cunningham and Terrien de Lacouperie in the nineteenth century to Hrozn ̆ (the chief decipherer of Hittite) and Fairservis in the twentieth, who convinced themselves over long periods that they had successfully deciphered the system — in over a half dozen different languages. It should finally be noted that claimed ‘positionalstatistical regularities’ in Indus inscriptions, which have played a key role in the Indus-script thesis since G.R. Hunter’s 1929 doctoral thesis, have been grossly exaggerated, and can only be maintained by ignoring or rationalizing countless exceptions to the claimed rules. The failure of the Dravidian model to generate verifiable linguistic readings of a single Indus sign has renewed claims in the last two decades that the inscriptions encoded some early form of Indo-Aryan or even Vedic Sanskrit (cf., e.g., Rao 1982; Kak 1988; Jha and Rajaram 2000), reviving a thesis that can be traced to the first attempt to decipher an Indus seal (Cunningham 1877). One corollary of recent versions of these claims is the suggestion that Indo-Aryan was native to India and not a later import from Central Asia, as historical linguists have argued for over 150 years on the basis of sound changes, word lending, and related developments in Central Asian, Iranian, and Indian languages (for recent discussions, see Witzel 1999, 2003). A second corollary, 4 On Lacouperie, who introduced the first faked evidence into the Indus-script story, see Farmer 2003. On other forgeries, most importantly Rajaram’s infamous ‘horse seal’, see Witzel and Farmer 2000. 5 Recent variations of these claims, which lay at the center of the Soviet and Finnish ‘decipherments’, show up in Mahadevan 1986, Wells 1999, and many others. Two points here merit special comment. The first is that positional regularities in Indus inscriptions are similar to those seen in countless non-linguistic sign systems, including the Near Eastern emblem systems discussed later and even modern highway and airport signs displaying multiple icons (for illustrations, see Farmer 2004a: 17-8). Similar com",
"title": ""
},
{
"docid": "c9972414881db682c219d69d59efa34a",
"text": "“Employee turnover” as a term is widely used in business circles. Although several studies have been conducted on this topic, most of the researchers focus on the causes of employee turnover. This research looked at extent of influence of various factors on employee turnover in urban and semi urban banks. The research was aimed at achieving the following objectives: identify the key factors of employee turnover; determine the extent to which the identified factors are influencing employees’ turnover. The study is based on the responses of the employees of leading banks. A self-developed questionnaire, measured on a Likert Scale was used to collect data from respondents. Quantitative research design was used and this design was chosen because its findings are generaliseable and data objective. The reliability of the data collected is done by split half method.. The collected data were being analyzed using a program called Statistical Package for Social Science (SPSS ver.16.0 For Windows). The data analysis is carried out by calculating mean, standard deviation and linear correlation. The difference between means of variable was estimated by using t-test. The following factors have significantly influenced employee turnover in banking sector: Work Environment, Job Stress, Compensation (Salary), Employee relationship with management, Career Growth.",
"title": ""
},
{
"docid": "5293dc28da110096fee7be1da7bf52b2",
"text": "The function of brown adipose tissue is to transfer energy from food into heat; physiologically, both the heat produced and the resulting decrease in metabolic efficiency can be of significance. Both the acute activity of the tissue, i.e., the heat production, and the recruitment process in the tissue (that results in a higher thermogenic capacity) are under the control of norepinephrine released from sympathetic nerves. In thermoregulatory thermogenesis, brown adipose tissue is essential for classical nonshivering thermogenesis (this phenomenon does not exist in the absence of functional brown adipose tissue), as well as for the cold acclimation-recruited norepinephrine-induced thermogenesis. Heat production from brown adipose tissue is activated whenever the organism is in need of extra heat, e.g., postnatally, during entry into a febrile state, and during arousal from hibernation, and the rate of thermogenesis is centrally controlled via a pathway initiated in the hypothalamus. Feeding as such also results in activation of brown adipose tissue; a series of diets, apparently all characterized by being low in protein, result in a leptin-dependent recruitment of the tissue; this metaboloregulatory thermogenesis is also under hypothalamic control. When the tissue is active, high amounts of lipids and glucose are combusted in the tissue. The development of brown adipose tissue with its characteristic protein, uncoupling protein-1 (UCP1), was probably determinative for the evolutionary success of mammals, as its thermogenesis enhances neonatal survival and allows for active life even in cold surroundings.",
"title": ""
},
{
"docid": "ddef188a971d53c01d242bb9198eac10",
"text": "State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems. This paper proposes a deep learning based approach that can utilize only the slot description in context without the need for any labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The main idea of this paper is to leverage the encoding of the slot names and descriptions within a multi-task deep learned slot filling model, to implicitly align slots across domains. The proposed approach is promising for solving the domain scaling problem and eliminating the need for any manually annotated data or explicit schema alignment. Furthermore, our experiments on multiple domains show that this approach results in significantly better slot-filling performance when compared to using only in-domain data, especially in the low data regime.",
"title": ""
},
{
"docid": "6520be1becd7e446b24ecb2fae6b1d50",
"text": "Neural networks in their modern deep learning incarnation have achieved state of the art performance on a wide variety of tasks and domains. A core intuition behind these methods is that they learn layers of features which interpolate between two domains in a series of related parts. The first part of this thesis introduces the building blocks of neural networks for computer vision. It starts with linear models then proceeds to deep multilayer perceptrons and convolutional neural networks, presenting the core details of each. However, the introduction also focuses on intuition by visualizing concrete examples of the parts of a modern network. The second part of this thesis investigates regularization of neural networks. Methods like dropout and others have been proposed to favor certain (empirically better) solutions over others. However, big deep neural networks still overfit very easily. This section proposes a new regularizer called DeCov, which leads to significantly reduced overfitting (difference between train and val performance) and greater generalization, sometimes better than dropout and other times not. The regularizer is based on the cross-covariance of hidden representations and takes advantage of the intuition that different features should try to represent different things, an intuition others have explored with similar losses. Experiments across a range of datasets and network architectures demonstrate reduced overfitting due to DeCov while almost always maintaining or increasing generalization performance and often improving performance over dropout.",
"title": ""
},
{
"docid": "5d76b2578fa2aa05a607ab0a542ab81f",
"text": "60 A practical approach to the optimal design of precast, prestressed concrete highway bridge girder systems is presented. The approach aims at standardizing the optimal design of bridge systems, as opposed to standardizing girder sections. Structural system optimization is shown to be more relevant than conventional girder optimization for an arbitrarily chosen structural system. Bridge system optimization is defined as the optimization of both longitudinal and transverse bridge configurations (number of spans, number of girders, girder type, reinforcements and tendon layout). As a result, the preliminary design process is much simplified by using some developed design charts from which selection of the optimum bridge system, number and type of girders, and amounts of prestressed and non-prestressed reinforcements are easily obtained for a given bridge length, width and loading type.",
"title": ""
},
{
"docid": "f8a6b721f99e54db0c4c81b9713aae78",
"text": "In this paper, a new bridgeless single-ended primary inductance converter power-factor-correction rectifier is introduced. The proposed circuit provides lower conduction losses with reduced components simultaneously. In conventional PFC converters (continuous-conduction-mode boost converter), a voltage loop and a current loop are required for PFC. In the proposed converter, the control circuit is simplified, and no current loop is required while the converter operates in discontinuous conduction mode. Theoretical analysis and simulation results are provided to explain circuit operation. A prototype of the proposed converter is realized, and the results are presented. The measured efficiency shows 1% improvement in comparison to conventional SEPIC rectifier.",
"title": ""
},
{
"docid": "edb4dc74a01c160896ed2cdd766f621d",
"text": "Radar mounted onboard micro-UAV is an early stage technology and its potentiality is far from being focused, even if radar sensors having costs compatible with micro-UAV are currently developed. As a contribution to this topic, this paper describes a radar-equipped hexacopter assembled thanks to complementary skills available at IREA and DII. In order to test the operation mode of the system as well as to investigate its target detection and localization capabilities, a feasibility experiment has been carried out in December 2016. The results of this flight campaign are presented, in terms of both raw data and images obtained by means of an ad-hoc data processing approach. These results provide an encouraging preliminary proof of the achievable outcomes.",
"title": ""
},
{
"docid": "81126b57a29b4c9aee46ecb04c7f43ca",
"text": "Within the field of bibliometrics, there is sustained interest in how nations “compete” in terms of academic disciplines, and what determinants explain why countries may have a specific advantage in one discipline over another. However, this literature has not, to date, presented a comprehensive structured model that could be used in the interpretation of a country’s research profile and aca‐ demic output. In this paper, we use frameworks from international business and economics to pre‐ sent such a model. Our study makes four major contributions. First, we include a very wide range of countries and disci‐ plines, explicitly including the Social Sciences, which unfortunately are excluded in most bibliometrics studies. Second, we apply theories of revealed comparative advantage and the competitive ad‐ vantage of nations to academic disciplines. Third, we cluster our 34 countries into five different groups that have distinct combinations of revealed comparative advantage in five major disciplines. Finally, based on our empirical work and prior literature, we present an academic diamond that de‐ tails factors likely to explain a country’s research profile and competitiveness in certain disciplines.",
"title": ""
},
{
"docid": "053307c8b892dbb919aa439b40b0326d",
"text": "One of the principal objectives of traffic accident analyses is to identify key factors that affect the severity of an accident. However, with the presence of heterogeneity in the raw data used, the analysis of traffic accidents becomes difficult. In this paper, Latent Class Cluster (LCC) is used as a preliminary tool for segmentation of 3229 accidents on rural highways in Granada (Spain) between 2005 and 2008. Next, Bayesian Networks (BNs) are used to identify the main factors involved in accident severity for both, the entire database (EDB) and the clusters previously obtained by LCC. The results of these cluster-based analyses are compared with the results of a full-data analysis. The results show that the combined use of both techniques is very interesting as it reveals further information that would not have been obtained without prior segmentation of the data. BN inference is used to obtain the variables that best identify accidents with killed or seriously injured. Accident type and sight distance have been identify in all the cases analysed; other variables such as time, occupant involved or age are identified in EDB and only in one cluster; whereas variables vehicles involved, number of injuries, atmospheric factors, pavement markings and pavement width are identified only in one cluster.",
"title": ""
},
{
"docid": "e5f0bca200dc4ef5a806feb06b4cf2a4",
"text": "Supply chain finance is a new financing model that makes the industry chain as an organic whole chain to develop financing services. Its purpose is to combine with financial institutions, companies and third-party logistics companies to achieve win-win situation. The supply chain is designed to maximize the financial value. The supply chain finance business in our country is still in its early stages. Conducting the research on risk assessment and control of the supply chain finance business has an important significance for the promotion of the development of our country supply chain finance business. The paper investigates the dynamic multiple attribute decision making problems, in which the decision information, provided by decision makers at different periods, is expressed in intuitionistic fuzzy numbers. We first develop one new aggregation operators called dynamic intuitionistic fuzzy Hamacher weighted averaging (DIFHWA) operator. Moreover, a procedure based on the DIFHWA and IFHWA operators is developed to solve the dynamic multiple attribute decision making problems where all the decision information about attribute values takes the form of intuitionistic fuzzy numbers collected at different periods. Finally, an illustrative example for risk assessment of supply chain finance is given to verify the developed approach and to demonstrate its practicality and effectiveness.",
"title": ""
}
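The aggregation step named in the passage above (weighted averaging of intuitionistic fuzzy numbers, of which the Hamacher operator is a generalization) can be illustrated with a short sketch. This shows only the basic algebraic case (Hamacher parameter γ = 1) applied across time periods with given weights; it is not the authors' DIFHWA implementation, and the weights and example numbers are assumptions.

```python
import numpy as np

def ifwa(alphas, weights):
    """Weighted averaging of intuitionistic fuzzy numbers (mu, nu).

    Algebraic case (Hamacher gamma = 1):
      mu = 1 - prod((1 - mu_i) ** w_i)
      nu = prod(nu_i ** w_i)
    """
    mus = np.array([a[0] for a in alphas])
    nus = np.array([a[1] for a in alphas])
    w = np.asarray(weights)
    mu = 1.0 - np.prod((1.0 - mus) ** w)
    nu = np.prod(nus ** w)
    return mu, nu

# Toy example: one attribute assessed at three periods, with weights
# emphasizing recent periods (all numbers are illustrative).
assessments = [(0.5, 0.4), (0.6, 0.3), (0.7, 0.2)]   # (membership, non-membership)
period_weights = [0.2, 0.3, 0.5]
print(ifwa(assessments, period_weights))
```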
] |
scidocsrr
|
ba2a7bbe0a994f231852664797ff7e97
|
Autonomous vehicles control in the VisLab Intercontinental Autonomous Challenge
|
[
{
"docid": "5e9dce428a2bcb6f7bc0074d9fe5162c",
"text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.",
"title": ""
}
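For readers unfamiliar with the base planner the passage above extends, here is a bare-bones RRT sketch in the plane. It deliberately omits the paper's key additions (closed-loop prediction, dynamic feasibility, safety checks); the workspace bounds, goal bias, and `is_free` collision checker are assumptions.

```python
import math, random

def rrt(start, goal, is_free, n_iters=2000, step=0.5, goal_tol=0.5, seed=0):
    """Basic RRT in a 2D workspace (illustrative only; no closed-loop prediction)."""
    rng = random.Random(seed)
    nodes, parents = [start], {0: None}
    for _ in range(n_iters):
        sample = goal if rng.random() < 0.05 else (rng.uniform(0, 50), rng.uniform(0, 50))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))  # nearest node
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d <= step:
            new = sample
        else:
            new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Walk parents back to the root to recover the path.
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j]); j = parents[j]
            return list(reversed(path))
    return None
```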
] |
[
{
"docid": "0b6c8d79180a4a17d4da661d6ab0b983",
"text": "The online social media such as Facebook, Twitter and YouTube has been used extensively during disaster and emergency situation. Despite the advantages offered by these services on supplying information in vague situation by citizen, we raised the issue of spreading misinformation on Twitter by using retweets. Accordingly, in this study, we conduct a user survey (n = 133) to investigate what is the user’s action towards spread message in Twitter, and why user decide to perform retweet on the spread message. As the result of the factor analyses, we extracted 3 factors on user’s action towards spread message which are: 1) Desire to spread the retweet messages as it is considered important, 2) Mark the retweet messages as favorite using Twitter “Favorite” function, and 3) Search for further information about the content of the retweet messages. Then, we further analyze why user decides to perform retweet. The results reveal that user has desire to spread the message which they think is important and the reason why they retweet it is because of the need to retweet, interesting tweet content and the tweet user. The results presented in this paper provide an understanding on user behavior of information diffusion, with the aim to reduce the spread of misinformation using Twitter during emergency situation.",
"title": ""
},
{
"docid": "a880d38d37862b46dc638b9a7e45b6ee",
"text": "This paper presents the modeling, simulation, and analysis of the dynamic behavior of a fictitious 2 × 320 MW variable-speed pump-turbine power plant, including a hydraulic system, electrical equipment, rotating inertias, and control systems. The modeling of the hydraulic and electrical components of the power plant is presented. The dynamic performances of a control strategy in generating mode and one in pumping mode are investigated by the simulation of the complete models in the case of change of active power set points. Then, a pseudocontinuous model of the converters feeding the rotor circuits is described. Due to this simplification, the simulation time can be reduced drastically (approximately factor 60). A first validation of the simplified model of the converters is obtained by comparison of the simulated results coming from the simplified and complete models for different modes of operation of the power plant. Experimental results performed on a 2.2-kW low-power test bench are also compared with the simulated results coming from both complete and simplified models related to this case and confirm the validity of the proposed simplified approach for the converters.",
"title": ""
},
{
"docid": "ee40d2e4a049f61a2c2b7eee2a2a98ae",
"text": "In Analog to digital convertor design converter, high speed comparator influences the overall performance of Flash/Pipeline Analog to Digital Converter (ADC) directly. This paper presents the schematic design of a CMOS comparator with high speed, low noise and low power dissipation. A schematic design of this comparator is given with 0.18μm TSMC Technology and simulated in cadence environment. Simulation results are presented and it shows that this design can work under high speed clock frequency 100MHz. The design has a low offset voltage 280.7mv, low power dissipation 0.37 mw and low noise 6.21μV.",
"title": ""
},
{
"docid": "4560fd4f946a5b31693591977ca11207",
"text": "Editors’ abstract. Middle East Arab terrorists are on the cutting edge of organizational networking and stand to gain significantly from the information revolution. They can harness information technology to enable less hierarchical, more networked designs—enhancing their flexibility, responsiveness, and resilience. In turn, information technology can enhance their offensive operational capabilities for the war of ideas as well as for the war of violent acts. Zanini and Edwards (both at RAND) focus their analysis primarily on Middle East terrorism but also discuss other groups around the world. They conclude with a series of recommendations for policymakers. This chapter draws on RAND research originally reported in Ian Lesser et al., Countering the New Terrorism (1999).",
"title": ""
},
{
"docid": "601d9060ac35db540cdd5942196db9e0",
"text": "In this paper, we review nine visualization techniques that can be used for visual exploration of multidimensional financial data. We illustrate the use of these techniques by studying the financial performance of companies from the pulp and paper industry. We also illustrate the use of visualization techniques for detecting multivariate outliers, and other patterns in financial performance data in the form of clusters, relationships, and trends. We provide a subjective comparison between different visualization techniques as to their capabilities for providing insight into financial performance data. The strengths of each technique and the potential benefits of using multiple visualization techniques for gaining insight into financial performance data are highlighted.",
"title": ""
},
{
"docid": "aedeb977109fd18ef3dd471b80e40fc1",
"text": "Business process modeling has undoubtedly emerged as a popular and relevant practice in Information Systems. Despite being an actively researched field, anecdotal evidence and experiences suggest that the focus of the research community is not always well aligned with the needs of industry. The main aim of this paper is, accordingly, to explore the current issues and the future challenges in business process modeling, as perceived by three key stakeholder groups (academics, practitioners, and tool vendors). We present the results of a global Delphi study with these three groups of stakeholders, and discuss the findings and their implications for research and practice. Our findings suggest that the critical areas of concern are standardization of modeling approaches, identification of the value proposition of business process modeling, and model-driven process execution. These areas are also expected to persist as business process modeling roadblocks in the future.",
"title": ""
},
{
"docid": "a1bd6742011302d35527cdbad73a82a3",
"text": "The Semantic Web contains an enormous amount of information in the form of knowledge bases (KB). To make this information available, many question answering (QA) systems over KBs were created in the last years. Building a QA system over KBs is difficult because there are many different challenges to be solved. In order to address these challenges, QA systems generally combine techniques from natural language processing, information retrieval, machine learning and Semantic Web. The aim of this survey is to give an overview of the techniques used in current QA systems over KBs. We present the techniques used by the QA systems which were evaluated on a popular series of benchmarks: Question Answering over Linked Data. Techniques that solve the same task are first grouped together and then described. The advantages and disadvantages are discussed for each technique. This allows a direct comparison of similar techniques. Additionally, we point to techniques that are used over WebQuestions and SimpleQuestions, which are two other popular benchmarks for QA systems.",
"title": ""
},
{
"docid": "3ce03df4e5faa4132b2e791833549525",
"text": "Cardiac left ventricle (LV) quantification is among the most clinically important tasks for identification and diagnosis of cardiac diseases, yet still a challenge due to the high variability of cardiac structure and the complexity of temporal dynamics. Full quantification, i.e., to simultaneously quantify all LV indices including two areas (cavity and myocardium), six regional wall thicknesses (RWT), three LV dimensions, and one cardiac phase, is even more challenging since the uncertain relatedness intra and inter each type of indices may hinder the learning procedure from better convergence and generalization. In this paper, we propose a newly-designed multitask learning network (FullLVNet), which is constituted by a deep convolution neural network (CNN) for expressive feature embedding of cardiac structure; two followed parallel recurrent neural network (RNN) modules for temporal dynamic modeling; and four linear models for the final estimation. During the final estimation, both intraand inter-task relatedness are modeled to enforce improvement of generalization: (1) respecting intra-task relatedness, group lasso is applied to each of the regression tasks for sparse and common feature selection and consistent prediction; (2) respecting inter-task relatedness, three phase-guided constraints are proposed to penalize violation of the temporal behavior of the obtained LV indices. Experiments on MR sequences of 145 subjects show that FullLVNet achieves high accurate prediction with our intraand inter-task relatedness, leading to MAE of 190 mm, 1.41 mm, 2.68 mm for average areas, RWT, dimensions and error rate of 10.4% for the phase classification. This endows our method a great potential in comprehensive clinical assessment of global, regional and dynamic cardiac function.",
"title": ""
},
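The intra-task constraint mentioned in the passage above is a group lasso applied to each regression task. The sketch below shows the penalty and its proximal operator (block soft-thresholding) on a toy parameter vector; it is a generic illustration, not the FullLVNet training code, and the grouping and numbers are assumptions.

```python
import numpy as np

def group_lasso_penalty(W, groups, lam):
    """Sum of Euclidean norms over parameter groups: lam * sum_g ||W[g]||_2."""
    return lam * sum(np.linalg.norm(W[g]) for g in groups)

def prox_group_lasso(W, groups, step_lam):
    """Block soft-thresholding: proximal operator of the group-lasso penalty."""
    W = W.copy()
    for g in groups:
        norm = np.linalg.norm(W[g])
        scale = max(0.0, 1.0 - step_lam / norm) if norm > 0 else 0.0
        W[g] = scale * W[g]
    return W

# Toy usage: 12 shared features split into 3 groups of 4 (grouping is illustrative).
W = np.random.randn(12)
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]
print(group_lasso_penalty(W, groups, lam=0.1))
print(prox_group_lasso(W, groups, step_lam=0.5))
```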
{
"docid": "0506949c45febe7ce99e3f37cd7edcf2",
"text": "Present study demonstrated that fibrillar β-amyloid peptide (fAβ1-42) induced ATP release, which in turn activated NADPH oxidase via the P2X7 receptor (P2X7R). Reactive oxygen species (ROS) production in fAβ1-42-treated microglia appeared to require Ca2+ influx from extracellular sources, because ROS generation was abolished to control levels in the absence of extracellular Ca2+. Considering previous observation of superoxide generation by Ca2+ influx through P2X7R in microglia, we hypothesized that ROS production in fAβ-stimulated microglia might be mediated by ATP released from the microglia. We therefore examined whether fAβ1-42-induced Ca2+ influx was mediated through P2X7R activation. In serial experiments, we found that microglial pretreatment with the P2X7R antagonists Pyridoxal-phosphate-6-azophenyl-2',4'- disulfonate (100 µM) or oxidized ATP (100 µM) inhibited fAβ-induced Ca2+ influx and reduced ROS generation to basal levels. Furthermore, ATP efflux from fAβ1-42-stimulated microglia was observed, and apyrase treatment decreased the generation of ROS. These findings provide conclusive evidence that fAβ-stimulated ROS generation in microglial cells is regulated by ATP released from the microglia in an autocrine manner.",
"title": ""
},
{
"docid": "065417a0c2e82cbd33798de1be98042f",
"text": "Deep neural networks usually require large labeled datasets to construct accurate models; however, in many real-world scenarios, such as medical image segmentation, labeling data are a time-consuming and costly human (expert) intelligent task. Semi-supervised methods leverage this issue by making use of a small labeled dataset and a larger set of unlabeled data. In this paper, we present a flexible framework for semi-supervised learning that combines the power of supervised methods that learn feature representations using state-of-the-art deep convolutional neural networks with the deeply embedded clustering algorithm that assigns data points to clusters based on their probability distributions and feature representations learned by the networks. Our proposed semi-supervised learning algorithm based on deeply embedded clustering (SSLDEC) learns feature representations via iterations by alternatively using labeled and unlabeled data points and computing target distributions from predictions. During this iterative procedure, the algorithm uses labeled samples to keep the model consistent and tuned with labeling, as it simultaneously learns to improve feature representation and predictions. The SSLDEC requires a few hyper-parameters and thus does not need large labeled validation sets, which addresses one of the main limitations of many semi-supervised learning algorithms. It is also flexible and can be used with many state-of-the-art deep neural network configurations for image classification and segmentation tasks. To this end, we implemented and tested our approach on benchmark image classification tasks as well as in a challenging medical image segmentation scenario. In benchmark classification tasks, the SSLDEC outperformed several state-of-the-art semi-supervised learning methods, achieving 0.46% error on MNIST with 1000 labeled points and 4.43% error on SVHN with 500 labeled points. In the iso-intense infant brain MRI tissue segmentation task, we implemented SSLDEC on a 3D densely connected fully convolutional neural network where we achieved significant improvement over supervised-only training as well as a semi-supervised method based on pseudo-labeling. Our results show that the SSLDEC can be effectively used to reduce the need for costly expert annotations, enhancing applications, such as automatic medical image segmentation.",
"title": ""
},
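The "target distributions computed from predictions" in the passage above follow the deep-embedded-clustering recipe: square the soft assignments and renormalize by cluster frequency. The sketch below shows that step in isolation, assuming soft assignments are already available; it is not the SSLDEC training loop, and the example numbers are made up.

```python
import numpy as np

def dec_target_distribution(q):
    """Compute a DEC-style target distribution P from soft assignments Q.

    q: array of shape (n_samples, n_clusters), rows sum to 1.
    p_ij is proportional to q_ij**2 / f_j with f_j = sum_i q_ij; squaring
    sharpens confident assignments, dividing by f_j balances cluster sizes.
    """
    f = q.sum(axis=0)                       # soft cluster frequencies
    num = q ** 2 / f
    return num / num.sum(axis=1, keepdims=True)

# Toy usage with made-up soft assignments for 3 samples and 2 clusters.
q = np.array([[0.9, 0.1],
              [0.6, 0.4],
              [0.2, 0.8]])
print(dec_target_distribution(q))
```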
{
"docid": "874dd5c2b3b3edc0d13aac33b60da21f",
"text": "Firefighters suffer a variety of life-threatening risks, including line-of-duty deaths, injuries, and exposures to hazardous substances. Support for reducing these risks is important. We built a partially occluded object reconstruction method on augmented reality glasses for first responders. We used a deep learning based on conditional generative adversarial networks to train associations between the various images of flammable and hazardous objects and their partially occluded counterparts. Our system then reconstructed an image of a new flammable object. Finally, the reconstructed image was superimposed on the input image to provide \"transparency\". The system imitates human learning about the laws of physics through experience by learning the shape of flammable objects and the flame characteristics.",
"title": ""
},
{
"docid": "ed929cce16774307d93719f50415e138",
"text": "BACKGROUND\nMore than one in five patients who undergo treatment for breast cancer will develop breast cancer-related lymphedema (BCRL). BCRL can occur as a result of breast cancer surgery and/or radiation therapy. BCRL can negatively impact comfort, function, and quality of life (QoL). Manual lymphatic drainage (MLD), a type of hands-on therapy, is frequently used for BCRL and often as part of complex decongestive therapy (CDT). CDT is a fourfold conservative treatment which includes MLD, compression therapy (consisting of compression bandages, compression sleeves, or other types of compression garments), skin care, and lymph-reducing exercises (LREs). Phase 1 of CDT is to reduce swelling; Phase 2 is to maintain the reduced swelling.\n\n\nOBJECTIVES\nTo assess the efficacy and safety of MLD in treating BCRL.\n\n\nSEARCH METHODS\nWe searched Medline, EMBASE, CENTRAL, WHO ICTRP (World Health Organization's International Clinical Trial Registry Platform), and Cochrane Breast Cancer Group's Specialised Register from root to 24 May 2013. No language restrictions were applied.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials (RCTs) or quasi-RCTs of women with BCRL. The intervention was MLD. The primary outcomes were (1) volumetric changes, (2) adverse events. Secondary outcomes were (1) function, (2) subjective sensations, (3) QoL, (4) cost of care.\n\n\nDATA COLLECTION AND ANALYSIS\nWe collected data on three volumetric outcomes. (1) LE (lymphedema) volume was defined as the amount of excess fluid left in the arm after treatment, calculated as volume in mL of affected arm post-treatment minus unaffected arm post-treatment. (2) Volume reduction was defined as the amount of fluid reduction in mL from before to after treatment calculated as the pretreatment LE volume of the affected arm minus the post-treatment LE volume of the affected arm. (3) Per cent reduction was defined as the proportion of fluid reduced relative to the baseline excess volume, calculated as volume reduction divided by baseline LE volume multiplied by 100. We entered trial data into Review Manger 5.2 (RevMan), pooled data using a fixed-effect model, and analyzed continuous data as mean differences (MDs) with 95% confidence intervals (CIs). We also explored subgroups to determine whether mild BCRL compared to moderate or severe BCRL, and BCRL less than a year compared to more than a year was associated with a better response to MLD.\n\n\nMAIN RESULTS\nSix trials were included. Based on similar designs, trials clustered in three categories.(1) MLD + standard physiotherapy versus standard physiotherapy (one trial) showed significant improvements in both groups from baseline but no significant between-groups differences for per cent reduction.(2) MLD + compression bandaging versus compression bandaging (two trials) showed significant per cent reductions of 30% to 38.6% for compression bandaging alone, and an additional 7.11% reduction for MLD (MD 7.11%, 95% CI 1.75% to 12.47%; two RCTs; 83 participants). Volume reduction was borderline significant (P = 0.06). LE volume was not significant. Subgroup analyses was significant showing that participants with mild-to-moderate BCRL were better responders to MLD than were moderate-to-severe participants.(3) MLD + compression therapy versus nonMLD treatment + compression therapy (three trials) were too varied to pool. One of the trials compared compression sleeve plus MLD to compression sleeve plus pneumatic pump. 
Volume reduction was statistically significant favoring MLD (MD 47.00 mL, 95% CI 15.25 mL to 78.75 mL; 1 RCT; 24 participants), per cent reduction was borderline significant (P=0.07), and LE volume was not significant. A second trial compared compression sleeve plus MLD to compression sleeve plus self-administered simple lymphatic drainage (SLD), and was significant for MLD for LE volume (MD -230.00 mL, 95% CI -450.84 mL to -9.16 mL; 1 RCT; 31 participants) but not for volume reduction or per cent reduction. A third trial of MLD + compression bandaging versus SLD + compression bandaging was not significant (P = 0.10) for per cent reduction, the only outcome measured (MD 11.80%, 95% CI -2.47% to 26.07%, 28 participants).MLD was well tolerated and safe in all trials.Two trials measured function as range of motion with conflicting results. One trial reported significant within-groups gains for both groups, but no between-groups differences. The other trial reported there were no significant within-groups gains and did not report between-groups results. One trial measured strength and reported no significant changes in either group.Two trials measured QoL, but results were not usable because one trial did not report any results, and the other trial did not report between-groups results.Four trials measured sensations such as pain and heaviness. Overall, the sensations were significantly reduced in both groups over baseline, but with no between-groups differences. No trials reported cost of care.Trials were small ranging from 24 to 45 participants. Most trials appeared to randomize participants adequately. However, in four trials the person measuring the swelling knew what treatment the participants were receiving, and this could have biased results.\n\n\nAUTHORS' CONCLUSIONS\nMLD is safe and may offer additional benefit to compression bandaging for swelling reduction. Compared to individuals with moderate-to-severe BCRL, those with mild-to-moderate BCRL may be the ones who benefit from adding MLD to an intensive course of treatment with compression bandaging. This finding, however, needs to be confirmed by randomized data.In trials where MLD and sleeve were compared with a nonMLD treatment and sleeve, volumetric outcomes were inconsistent within the same trial. Research is needed to identify the most clinically meaningful volumetric measurement, to incorporate newer technologies in LE assessment, and to assess other clinically relevant outcomes such as fibrotic tissue formation.Findings were contradictory for function (range of motion), and inconclusive for quality of life.For symptoms such as pain and heaviness, 60% to 80% of participants reported feeling better regardless of which treatment they received.One-year follow-up suggests that once swelling had been reduced, participants were likely to keep their swelling down if they continued to use a custom-made sleeve.",
"title": ""
},
{
"docid": "57bedff3c51ef07f17aa7dde32e2e2a2",
"text": "We present FaceTouch, a novel interaction concept for mobile Virtual Reality (VR) head-mounted displays (HMDs) that leverages the backside as a touch-sensitive surface. With FaceTouch, the user can point at and select virtual content inside their field-of-view by touching the corresponding location at the backside of the HMD utilizing their sense of proprioception. This allows for rich interaction (e.g. gestures) in mobile and nomadic scenarios without having to carry additional accessories (e.g. a gamepad). We built a prototype of FaceTouch and conducted two user studies. In the first study we measured the precision of FaceTouch in a display-fixed target selection task using three different selection techniques showing a low error rate of 2% indicate the viability for everyday usage. To asses the impact of different mounting positions on the user performance we conducted a second study. We compared three mounting positions of the touchpad (face, hand and side) showing that mounting the touchpad at the back of the HMD resulted in a significantly lower error rate, lower selection time and higher usability. Finally, we present interaction techniques and three example applications that explore the FaceTouch design space.",
"title": ""
},
{
"docid": "a16a66d4eac400a328b7ea7276d37ed4",
"text": "In this paper, we analyze the impact of the Layout Dependent Effect (LDE) observed on MOSFETs. It is shown that changing the layout has an impact on MOSFET device parameters and reliability. Here, we studied the impact of the Well Proximity Effect (WPE), Length of Diffusion (LOD) and Oxide Spacing Effect (OSE) on MOSFET device parameters and reliability. We also analyzed the impact of SiGe on LDE, since SiGe is commonly used to boost device performance.",
"title": ""
},
{
"docid": "20ecae219ecf21429fb7c2697339fe50",
"text": "Massively multiplayer games hold a huge market in the digital entertainment industry. Companies invest heavily in game and graphics development since a successful online game can attract millions of users, and this translates to a huge investment payoff. However, multiplayer online games are also subject to various forms of hacks and cheats. Hackers can alter the graphics rendering to reveal information that would otherwise be hidden in a normal game, or cheaters can use a software robot to play the game automatically and gain an unfair advantage. Currently, some popular online games release software patches or incorporate anti-cheating software to detect known cheats. This not only creates deployment difficulty, but new cheats will still be able to breach the normal game logic until software patches are available. Moreover, the anti-cheating software itself is also vulnerable to hacks. In this paper, we propose a scalable and efficient method to detect whether a player is cheating or not. The methodology is based on the dynamic Bayesian network approach. The detection framework relies solely on the game states and runs in the game server only. Therefore it is invulnerable to hacks and it is a much more deployable solution. To demonstrate the effectiveness of the proposed method, we implement a prototype multiplayer game system and use it to detect whether a player is using the “aiming robot” for cheating or not. Experiments show that not only can we effectively detect cheaters, but the false positive rate is extremely low. We believe the proposed methodology and the prototype system provide a first step toward a systematic study of cheating detection and security research in the area of online multiplayer games.",
"title": ""
},
{
"docid": "ac559a0d26723632be3a8e7e8ecadccc",
"text": "The success of Ambient Intelligence (AmI) will depend on how secure it can be made, how privacy and other rights of individuals can be protected and how individuals can come to trust the intelligent world that surrounds them and through which they move. This article addresses these issues by analysing scenarios for ambient intelligence applications that have been developed over the last few years. It elaborates the assumptions that promoters make about the likely use of the technology and possibly unwanted side effects. It concludes with a number of threats to personal privacy that become evident. © 2005 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3fb1a4ed34309cb02052ac6b0900d178",
"text": "Password managers protect users' passwords by using a master password or a security token. The security of those using a master password is weakened if users choose weak master passwords. The usability of those using a security token is low since users always need the token to log in. In this paper, we propose a new framework for password managers that achieves both high security and high usability by employing secret sharing and a personal server for each user.",
"title": ""
},
{
"docid": "091cd37683d7e1b8ceef19b4042f4ac3",
"text": "Closed or nearly closed regions are an important form of perceptual structure arising both in natural imagery and in many forms of human-created imagery including sketches, line art, graphics, and formal drawings. This paper presents an effective algorithm especially suited for finding perceptually salient, compact closed region structure in hand-drawn sketches and line art. We start with a graph of curvilinear fragments whose proximal endpoints form junctions. The key problem is to manage the search of possible path continuations through junctions in an effort to find paths satisfying global criteria for closure and figural salience. We identify constraints particular to this domain for ranking path continuations through junctions, based on observations of the ways that junctions arise in line drawings. In particular, we delineate the roles of the principle of good continuation versus maximally turning paths. Best-first bidirectional search checks for the cleanest, most obvious paths first, then reverts to more exhaustive search to find paths cluttered by blind alleys. Results are demonstrated on line drawings from several sources including line art, engineering drawings, sketches on whiteboards, as well as contours from photographic imagery.",
"title": ""
},
{
"docid": "ffd2ebf7cf5b4074d6dcb796785af24e",
"text": "Due to the growing need for computer applications capable of detecting the emotional state of users [1], the study of emotions in informatics has increased. The direct options for detecting emotions are inquiries and questionnaires with specific questions which participants answer on a Likert scale. Because every participant has to answer all the questions and these need to be manually evaluated, it is not a very efficient method. That is the reason for inventing new methods for classifying emotions, for example through physiological responses. Motivated by everyday interaction among humans, a great part of the research in this area has explored detecting emotions from facial and voice information. One of the available software solutions is Noldus FaceReader, which can recognize six emotional states: joy, sadness, anger, surprise, fear, disgust, and a neutral state. However, it depends on good light conditions and the accuracy can also be decreased by an object covering part of a participant’s face, e.g., glasses. In order to address these shortcomings, other approaches to detect emotions have been proposed which focus on different physiological information, such as heart rate, skin conductance, and pupil dilation [2]. A still relatively new field of research in affective brain-computer interaction attempts to detect emotions using electroencephalograms (EEGs) [3]. In our approach, we aim to evaluate the EEG devices Emotiv EPOC and Emotiv Insight and classify emotions from the data captured by these devices. Our method is based on the method used by psychologists. In order to represent emotions we use the dimensional approach [4], which is based on the fact that all subjective feelings can be projected into a 3D space whose dimensions are: (i) valence – positive/negative emotion, (ii) arousal – strong/weak emotion, and (iii) tension – tensed/relieved emotion. We omit the third dimension due to the difficulty of determining the amount of tension. When classifying emotions by this method, respondents identify how positive (valence) and how strong (arousal) their emotion was. These values are projected into 2D space, called the Valence–Arousal model, which can be divided into four quadrants: strong-",
"title": ""
},
{
"docid": "f1eb96dd2109aad21ac1bccfe8dcd012",
"text": "In imitation learning, an agent learns how to behave in an environment with an unknown cost function by mimicking expert demonstrations. Existing imitation learning algorithms typically involve solving a sequence of planning or reinforcement learning problems. Such algorithms are therefore not directly applicable to large, high-dimensional environments, and their performance can significantly degrade if the planning problems are not solved to optimality. Under the apprenticeship learning formalism, we develop alternative model-free algorithms for finding a parameterized stochastic policy that performs at least as well as an expert policy on an unknown cost function, based on sample trajectories from the expert. Our approach, based on policy gradients, scales to large continuous environments with guaranteed convergence to local minima.",
"title": ""
}
] |
scidocsrr
|
1717457b0dd7bb90fad6769a0a582416
|
A Novel Compact Torsional Spring for Series Elastic Actuators for Assistive Wearable Robots
|
[
{
"docid": "7dcdad7b525dcc74f9333ab04e643c80",
"text": "BACKGROUND\na large proportion of falls in older people occur when walking; however the mechanisms underlying impaired balance during gait are poorly understood.\n\n\nOBJECTIVE\nto evaluate acceleration patterns at the head and pelvis in young and older subjects when walking on a level and an irregular walking surface, in order to develop an understanding of how ageing affects postural responses to challenging walking conditions.\n\n\nMETHODS\ntemporo-spatial gait parameters and variables derived from acceleration signals were recorded in 30 young people aged 22-39 years (mean 29.0, SD 4.3), and 30 older people with a low risk of falling aged 75-85 years (mean 79.0, SD 3.0) while walking on a level and an irregular walking surface. Subjects also underwent tests of vision, sensation, strength, reaction time and balance.\n\n\nRESULTS\nolder subjects exhibited a more conservative gait pattern, characterised by reduced velocity, shorter step length and increased step timing variability. These differences were particularly pronounced when walking on the irregular surface. The magnitude of accelerations at the head and pelvis were generally smaller in older subjects; however the smoothness of the acceleration signals did not differ between the two groups. Older subjects performed worse on tests of vision, peripheral sensation, strength, reaction time and balance.\n\n\nCONCLUSION\nthe adoption of a more conservative basic gait pattern by older people with a low risk of falling reduces the magnitude of accelerations experienced by the head and pelvis when walking, which is likely to be a compensatory strategy to maintain balance in the presence of age-related deficits in physiological function, particularly reduced lower limb strength.",
"title": ""
}
] |
[
{
"docid": "ab157111a39a4f081bdf0126e869f65d",
"text": "As event-related brain potential (ERP) researchers have increased the number of recording sites, they have gained further insights into the electrical activity in the neural networks underlying explicit memory. A review of the results of such ERP mapping studies suggests that there is good correspondence between ERP results and those from brain imaging studies that map hemodynamic changes. This concordance is important because the combination of the high temporal resolution of ERPs with the high spatial resolution of hemodynamic imaging methods will provide a greatly increased understanding of the spatio-temporal dynamics of the brain networks that encode and retrieve explicit memories.",
"title": ""
},
{
"docid": "1e3e52f584863903625a07aabd1517d3",
"text": "Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on Cityscapes dataset.",
"title": ""
},
{
"docid": "e579b056407e01cc42b5d898ab06fd72",
"text": "Convolutional Neural Networks (ConvNets) are a powerful Deep Learning model, providing state-of-the-art accuracy to many emerging classification problems. However, ConvNet classification is a computationally heavy task, suffering from rapid complexity scaling. This paper presents fpgaConvNet, a novel domain-specific modelling framework together with an automated design methodology for the mapping of ConvNets onto reconfigurable FPGA-based platforms. By interpreting ConvNet classification as a streaming application, the proposed framework employs the Synchronous Dataflow (SDF) model of computation as its basis and proposes a set of transformations on the SDF graph that explore the performance-resource design space, while taking into account platform-specific resource constraints. A comparison with existing ConvNet FPGA works shows that the proposed fully-automated methodology yields hardware designs that improve the performance density by up to 1.62× and reach up to 90.75% of the raw performance of architectures that are hand-tuned for particular ConvNets.",
"title": ""
},
{
"docid": "85e5405bdd852741f1af3f89d880805c",
"text": "The automatic assessment of the level of independence of a person, based on the recognition of a set of Activities of Daily Living, is among the most challenging research fields in Ambient Intelligence. The article proposes a framework for the recognition of motion primitives, relying on Gaussian Mixture Modeling and Gaussian Mixture Regression for the creation of activity models. A recognition procedure based on Dynamic Time Warping and Mahalanobis distance is found to: (i) ensure good classification results; (ii) exploit the properties of GMM and GMR modeling to allow for an easy run-time recognition; (iii) enhance the consistency of the recognition via the use of a classifier allowing unknown as an answer.",
"title": ""
},
{
"docid": "347509d68f6efd4da747a7a3e704a9a2",
"text": "Stack Overflow is widely regarded as the most popular Community driven Question Answering (CQA) website for programmers. Questions posted on Stack Overflow which are not related to programming topics, are marked as `closed' by experienced users and community moderators. A question can be `closed' for five reasons -- duplicate, off-topic, subjective, not a real question and too localized. In this work, we present the first study of `closed' questions on Stack Overflow. We download 4 years of publicly available data which contains 3.4 Million questions. We first analyze and characterize the complete set of 0.1 Million `closed' questions. Next, we use a machine learning framework and build a predictive model to identify a `closed' question at the time of question creation.\n One of our key findings is that despite being marked as `closed', subjective questions contain high information value and are very popular with the users. We observe an increasing trend in the percentage of closed questions over time and find that this increase is positively correlated to the number of newly registered users. In addition, we also see a decrease in community participation to mark a `closed' question which has led to an increase in moderation job time. We also find that questions closed with the Duplicate and Off Topic labels are relatively more prone to reputation gaming. Our analysis suggests broader implications for content quality maintenance on CQA websites. For the `closed' question prediction task, we make use of multiple genres of feature sets based on - user profile, community process, textual style and question content. We use a state-of-art machine learning classifier based on an ensemble framework and achieve an overall accuracy of 70.3%. Analysis of the feature space reveals that `closed' questions are relatively less informative and descriptive than non-`closed' questions. To the best of our knowledge, this is the first experimental study to analyze and predict `closed' questions on Stack Overflow.",
"title": ""
},
{
"docid": "425270bbfd1290a0692afeea95fa090f",
"text": "This paper introduces a bounding gait control algorithm that allows a successful implementation of duty cycle modulation in the MIT Cheetah 2. Instead of controlling leg stiffness to emulate a `springy leg' inspired by the Spring-Loaded-Inverted-Pendulum (SLIP) model, the algorithm prescribes vertical impulse by generating scaled ground reaction forces at each step to achieve the desired stance and total stride duration. Therefore, we can control the duty cycle: the percentage of the stance phase over the entire cycle. By prescribing the required vertical impulse of the ground reaction force at each step, the algorithm can adapt to variable duty cycles attributed to variations in running speed. Following the law of conservation of linear momentum, in order to achieve a limit-cycle gait, the sum of all vertical ground reaction forces must match the vertical momentum created by gravity during a cycle. In addition, we added a virtual compliance control in the vertical direction to enhance stability. The stiffness of the virtual compliance is selected based on the eigenvalue analysis of the linearized Poincaré map, and the chosen stiffness is 700 N/m, which corresponds to around 12% of the stiffness used in the previous trotting experiments of the MIT Cheetah, where the ground reaction forces are purely caused by the impedance controller with equilibrium point trajectories. This indicates that the virtual compliance control does not contribute significantly to generating ground reaction forces, but rather to stability. The experimental results show that the algorithm successfully prescribes the duty cycle for stable bounding gaits. This new approach can shed light on variable-speed running control algorithms.",
"title": ""
},
{
"docid": "44543930ea12872520e87dfa45e4bdc2",
"text": "In this paper, we characterize the delay profile of an Ethernet cross-traffic network statically loaded with one of the ITU-T network models and a larger Ethernet inline traffic loaded with uniformly sized packets, showing how the average time interval between consecutive minimum-delayed packets increases with increased network load. We compare three existing skew-estimation algorithms and show that the best performance is achieved by solving a linear programming problem on \"de-noised\" delay samples. This skew-estimation method forms the basis of a new sample-mode algorithm for packet delay variation filtering. We use numerical simulations in OPNET to illustrate the performance of the sample-mode filter in the networks. We compare the performance of the proposed PDV filter with those of the existing sample minimum, mean, and maximum filters and observe that the sample-mode filtering algorithm is able to match or outperform other types of filters, at different levels of network load.",
"title": ""
},
{
"docid": "166ea8466f5debc7c09880ba17c819e1",
"text": "Lymphoepithelioma-like carcinoma (LELCA) of the urinary bladder is a rare variant of bladder cancer characterized by a malignant epithelial component densely infiltrated by lymphoid cells. It is characterized by indistinct cytoplasmic borders and a syncytial growth pattern. These neoplasms deserve recognition and attention, chiefly because they may be responsive to chemotherapy. We report on the clinicopathologic features of 13 cases of LELCA recorded since 1981. The chief complaint in all 13 patients was hematuria. Their ages ranged from 58 years to 82 years. All tumors were muscle invasive. A significant lymphocytic reaction was present in all of these tumors. There were three pure LELCA and six predominant LELCA with a concurrent transitional cell carcinoma (TCC). The remainder four cases had a focal LELCA component admixed with TCC. Immunohistochemistry showed LELCA to be reactive against epithelial membrane antigen and several cytokeratins (CKs; AE1/AE3, AE1, AE3, CK7, and CK8). CK20 and CD44v6 stained focally. The lymphocytic component was composed of a mixture of T and B cells intermingled with some dendritic cells and histiocytes. Latent membrane protein 1 (LMP1) immunostaining and in situ hybridization for Epstein-Barr virus were negative in all 13 cases. DNA ploidy of these tumors gave DNA histograms with diploid peaks (n=7) or non-diploid peaks (aneuploid or tetraploid; n=6). All patients with pure and 66% with predominant LELCA were alive, while all patients having focal LELCA died of disease. Our data suggest that pure and predominant LELCA of the bladder appear to be morphologically and clinically different from other bladder (undifferentiated and poorly differentiated conventional TCC) carcinomas and should be recognized as separate clinicopathological variants of TCC with heavy lymphocytic reaction relevant in patient management.",
"title": ""
},
{
"docid": "3e88008841741d3d320a17490e5d9624",
"text": "In this project, the task of architecture classification for monuments and buildings from the Indian subcontinent was explored. Five major classes of architecture were taken and various supervised learning methods, both probabilistic and nonprobabilistic, were experimented with in order to classify the monuments into one of the five categories. The categories were: ‘Ancient’, ‘British’, ‘Indo-Islamic’, ‘Maratha’ and ‘Sikh’. Local ORB feature descriptors were used to represent each image, and clustering was applied to quantize the obtained features to a smaller size. In addition to the typical method of using features to do an image-wise classification, another method in which descriptor-wise classification is done was also explored. In this method, the image label was assigned as the mode of the labels of the descriptors of that image. It was found that among the different classifiers, k nearest neighbors for the case of descriptor-wise classification performed the best.",
"title": ""
},
{
"docid": "d390ba28e1bb9fdb72b2de8498838806",
"text": "Named Entity Disambiguation algorithms typically learn a single model for all target entities. In this paper we present a word expert model and train separate deep learning models for each target entity string, yielding 500K classification tasks. This gives us the opportunity to benchmark popular text representation alternatives on this massive dataset. In order to face scarce training data we propose a simple data-augmentation technique and transfer-learning. We show that bag-of-word-embeddings are better than LSTMs for tasks with scarce training data, while the situation is reversed when having larger amounts. Transferring an LSTM which is learned on all datasets is the most effective context representation option for the word experts in all frequency bands. The experiments show that our system trained on out-of-domain Wikipedia data surpasses comparable NED systems which have been trained on in-domain training data.",
"title": ""
},
{
"docid": "8e2bc8c050ebaeb295f74f9e405ed280",
"text": "Multi-modal semantics has relied on feature norms or raw image data for perceptual input. In this paper we examine grounding semantic representations in raw auditory data, using standard evaluations for multi-modal semantics, including measuring conceptual similarity and relatedness. We also evaluate cross-modal mappings, through a zero-shot learning task mapping between linguistic and auditory modalities. In addition, we evaluate multimodal representations on an unsupervised musical instrument clustering task. To our knowledge, this is the first work to combine linguistic and auditory information into multi-modal representations.",
"title": ""
},
{
"docid": "da33e35d323c8bb4f699cb02b5ffe466",
"text": "Force sensing is a crucial task for robots, especially when the end effectors such as fingers and hands need to interact with an unknown environment, for example in a humanoid robot. In order to sense such forces, a force/torque sensor is an essential component. Many available force/torque sensors are based on strain gauges, but other sensing principles are also possible. In this paper we describe steps towards a capacitive type based sensor. Several MEMS capacitive sensors are described in the literature; however very few larger sensors are available, as capacitive sensors usually have disadvantages such as severe hysteresis and temperature sensitivity. On the other hand, capacitive sensors have the advantage of the availability of small sized chips for sensor readout and digitization. We employ copper beryllium for the transducer, which has been modified from the ones described in the literature to be able to be used in a small sized, robust force/torque sensor. Therefore, as the first step toward the goal of building such a sensor, in this study we have created a prototype sensing unit and have tested its sensitivity. No viscoelastic materials are used for the sensing unit, which usually introduce severe hysteresis in capacitive sensors. We have achieved a high signal-to-noise ratio, high sensitivity and a range of 10 Newton.",
"title": ""
},
{
"docid": "3770720cff3a36596df097835f4f10a9",
"text": "As mobile computing technologies have become more powerful and inclusive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many studies on MALL consider that emerging mobile technologies have considerable potential for effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research and reviews on mobile assisted language learning tend to focus on more detailed applications of newly emerging mobile technology, rather than taking a broader view of the types of mobile devices themselves. In this paper, I thus reviewed recent research and conference papers from the last decade that utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.",
"title": ""
},
{
"docid": "a607addf74880bcbfc2f097ae4c06a31",
"text": "In this paper, we take an input-output approach to enhance the study of cooperative multiagent optimization problems that admit decentralized and selfish solutions, hence eliminating the need for an interagent communication network. The framework under investigation is a set of $n$ independent agents coupled only through an overall cost that penalizes the divergence of each agent from the average collective behavior. In the case of identical agents, or more generally agents with identical essential input-output dynamics, we show that optimal decentralized and selfish solutions are possible in a variety of standard input-output cost criteria. These include the cases of $\\ell_{1}, \\ell_{2}, \\ell_{\\infty}$ induced, and $\\mathcal{H}_{2}$ norms for any finite $n$. Moreover, if the cost includes non-deviation from average variables, the above results hold true as well for $\\ell_{1}, \\ell_{2}, \\ell_{\\infty}$ induced norms and any $n$, while they hold true for the normalized, per-agent square $\\mathcal{H}_{2}$ norm, cost as $n\\rightarrow\\infty$. We also consider the case of nonidentical agent dynamics and prove that similar results hold asymptotically as $n\\rightarrow\\infty$ in the case of $\\ell_{2}$ induced norms (i.e., $\\mathcal{H}_{\\infty}$) under a growth assumption on the $\\mathcal{H}_{\\infty}$ norm of the essential dynamics of the collective.",
"title": ""
},
{
"docid": "beb90397ff3d1ef0d71463fb2d9b1b97",
"text": "Due to the strong competition that exists today, most manufacturing organizations are in a continuous effort for increasing their profits and reducing their costs. Accurate sales forecasting is certainly an inexpensive way to meet the aforementioned goals, since this leads to improved customer service, reduced lost sales and product returns and more efficient production planning. Especially for the food industry, successful sales forecasting systems can be very beneficial, due to the short shelf-life of many food products and the importance of the product quality which is closely related to human health. In this paper we present a complete framework that can be used for developing nonlinear time series sales forecasting models. The method is a combination of two artificial intelligence technologies, namely the radial basis function (RBF) neural network architecture and a specially designed genetic algorithm (GA). The methodology is applied successfully to sales data of fresh milk provided by a major manufacturing company of dairy products. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e777794833a060f99e11675952cd3342",
"text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but also for releasing heat via latent heat.",
"title": ""
},
{
"docid": "232891b57ea0ca1852fbe3e63157db26",
"text": "With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of securitycritical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.",
"title": ""
},
{
"docid": "35eb25ce7a3b178a11068af840c70816",
"text": "Entropic regularization is quickly emerging as a new standard in optimal transport (OT). It enables to cast the OT computation as a differentiable and unconstrained convex optimization problem, which can be efficiently solved using the Sinkhorn algorithm. However, entropy keeps the transportation plan strictly positive and therefore completely dense, unlike unregularized OT. This lack of sparsity can be problematic in applications where the transportation plan itself is of interest. In this paper, we explore regularizing the primal and dual OT formulations with a strongly convex term, which corresponds to relaxing the dual and primal constraints with smooth approximations. We show how to incorporate squared 2-norm and group lasso regularizations within that framework, leading to sparse and group-sparse transportation plans. On the theoretical side, we bound the approximation error introduced by regularizing the primal and dual formulations. Our results suggest that, for the regularized primal, the approximation error can often be smaller with squared 2-norm than with entropic regularization. We showcase our proposed framework on the task of color transfer.",
"title": ""
},
{
"docid": "ae12112f8c9434678ce4e09ee525f96e",
"text": "The More Electric Aircraft concept offers many potential benefits in the design and efficiency of future large, manned aircraft. In this article, typical aircraft electrical power systems and associated loads are described as well as the exciting future challenges for the aerospace industry. The importance of power electronics as an enabling technology for this step change in aircraft design is considered, and examples of typical system designs are discussed.",
"title": ""
}
] |
scidocsrr
|
a7567b7c83c2c0ae7eef467289a080cc
|
Part-based R-CNNs for Fine-grained Category Detection
|
[
{
"docid": "0d576c082ca9db68c0b8b614eb3df6b7",
"text": "This paper introduces FGVC-Aircraft, a new dataset containing 10,000 images of aircraft spanning 100 aircraft models, organised in a three-level hierarchy. At the finer level, differences between models are often subtle but always visually measurable, making visual recognition challenging but possible. A benchmark is obtained by defining corresponding classification tasks and evaluation protocols, and baseline results are presented. The construction of this dataset was made possible by the work of aircraft enthusiasts, a strategy that can extend to the study of a number of other object classes. Compared to the domains usually considered in fine-grained visual classification (FGVC), for example animals, aircraft are rigid and hence less deformable. They, however, present other interesting modes of variation, including purpose, size, designation, structure, historical style, and branding.",
"title": ""
},
{
"docid": "9b8072d38753fc64199693a44297a135",
"text": "We propose a segmentation algorithm for the purposes of large-scale flower species recognition. Our approach is based on identifying potential object regions at the time of detection. We then apply a Laplacian-based segmentation, which is guided by these initially detected regions. More specifically, we show that 1) recognizing parts of the potential object helps the segmentation and makes it more robust to variabilities in both the background and the object appearances, 2) segmenting the object of interest at test time is beneficial for the subsequent recognition. Here we consider a large-scale dataset containing 578 flower species and 250,000 images. This dataset is developed by our team for the purposes of providing a flower recognition application for general use and is the largest in its scale and scope. We tested the proposed segmentation algorithm on the well-known 102 Oxford flowers benchmark [11] and on the new challenging large-scale 578 flower dataset, that we have collected. We observed about 4% improvements in the recognition performance on both datasets compared to the baseline. The algorithm also improves all other known results on the Oxford 102 flower benchmark dataset. Furthermore, our method is both simpler and faster than other related approaches, e.g. [3, 14], and can be potentially applicable to other subcategory recognition datasets.",
"title": ""
},
{
"docid": "94fd7030e7b638e02ca89f04d8ae2fff",
"text": "State-of-the-art deep learning algorithms generally require large amounts of data for model training. Lack thereof can severely deteriorate the performance, particularly in scenarios with fine-grained boundaries between categories. To this end, we propose a multimodal approach that facilitates bridging the information gap by means of meaningful joint embeddings. Specifically, we present a benchmark that is multimodal during training (i.e. images and texts) and single-modal in testing time (i.e. images), with the associated task to utilize multimodal data in base classes (with many samples), to learn explicit visual classifiers for novel classes (with few samples). Next, we propose a framework built upon the idea of cross-modal data hallucination. In this regard, we introduce a discriminative text-conditional GAN for sample generation with a simple self-paced strategy for sample selection. We show the results of our proposed discriminative hallucinated method for 1-, 2-, and 5shot learning on the CUB dataset, where the accuracy is improved by employing multimodal data.",
"title": ""
}
] |
[
{
"docid": "8d45138ec69bb4ee47efa088c03d7a42",
"text": "Precision medicine is at the forefront of biomedical research. Cancer registries provide rich perspectives and electronic health records (EHRs) are commonly utilized to gather additional clinical data elements needed for translational research. However, manual annotation is resource-intense and not readily scalable. Informatics-based phenotyping presents an ideal solution, but perspectives obtained can be impacted by both data source and algorithm selection. We derived breast cancer (BC) receptor status phenotypes from structured and unstructured EHR data using rule-based algorithms, including natural language processing (NLP). Overall, the use of NLP increased BC receptor status coverage by 39.2% from 69.1% with structured medication information alone. Using all available EHR data, estrogen receptor-positive BC cases were ascertained with high precision (P = 0.976) and recall (R = 0.987) compared with gold standard chart-reviewed patients. However, status negation (R = 0.591) decreased 40.2% when relying on structured medications alone. Using multiple EHR data types (and thorough understanding of the perspectives offered) are necessary to derive robust EHR-based precision medicine phenotypes.",
"title": ""
},
{
"docid": "1dfe7a3e875436db76496931db34c7db",
"text": "Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop an highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.",
"title": ""
},
{
"docid": "f32110ae02928ddca1821101d483752c",
"text": "Maternal neglect, including physical and emotional neglect, is a pervasive public health challenge with serious long-term effects on child health and development. I provide an overview of the neurobiological basis of maternal caregiving, aiming to better understand how to prevent and respond to maternal neglect. Drawing from both animal and human studies, key biological systems are identified that contribute to maternal caregiving behaviour, focusing on the oxytocinergic and dopaminergic systems. Mesocorticolimbic and nigrostriatal dopamine pathways contribute to the processing of infant-related sensory cues leading to a behavioural response. Oxytocin may activate the dopaminergic reward pathways in response to social cues. Human neuroimaging studies are summarised that demonstrate parallels between animal and human maternal caregiving responses in the brain. By comparing different patterns of human adult attachment, we gain a clearer understanding of how differences in maternal brain and endocrine responses may contribute to maternal neglect. For example, in insecure/dismissing attachment, which may be associated with emotional neglect, we see reduced activation of the mesocorticolimbic dopamine reward system in response to infant face cues, as well as decreased peripheral oxytocin response to mother-infant contact. We are currently testing whether the administration of intranasal oxytocin, as part of a randomised placebo controlled trial, may reverse some of these neurological differences, and potentially augment psychosocial and behavioural interventions for maternal neglect.",
"title": ""
},
{
"docid": "4255fd867660b8a6998c058508339e90",
"text": "This paper presents the concept of a new robotic joint composed of two electric motors as inputs, an epicyclic gearing system for the transmission, and a single output. The proposed joint mechanism has a wider range of speed and torque performances compared to a traditional robot joint using a single motor and gearbox. The dynamic equations for the mechanical transmission system are given and a dual-motor joint mechanism is designed and prototyped to test this new concept of robotic joint. Also, the potential advantages of this joint concept for the design of manipulators for which a wide range of performances is desired are discussed. This work is motivated by the development of field robots designed for operation and maintenance tasks on power distribution lines.",
"title": ""
},
{
"docid": "3f9a46f472ab276c39fb96b78df132ee",
"text": "In this paper, we present a novel technique that enables capturing of detailed 3D models from flash photographs integrating shading and silhouette cues. Our main contribution is an optimization framework which not only captures subtle surface details but also handles changes in topology. To incorporate normals estimated from shading, we employ a mesh-based deformable model using deformation gradient. This method is capable of manipulating precise geometry and, in fact, it outperforms previous methods in terms of both accuracy and efficiency. To adapt the topology of the mesh, we convert the mesh into an implicit surface representation and then back to a mesh representation. This simple procedure removes self-intersecting regions of the mesh and solves the topology problem effectively. In addition to the algorithm, we introduce a hand-held setup to achieve multi-view photometric stereo. The key idea is to acquire flash photographs from a wide range of positions in order to obtain a sufficient lighting variation even with a standard flash unit attached to the camera. Experimental results showed that our method can capture detailed shapes of various objects and cope with topology changes well.",
"title": ""
},
{
"docid": "8701fb24bd6f3138e3b7d75f37f2ba87",
"text": "Internet of Things (IoT) is an integral part of application domains such as smart-home and digital healthcare. Various standard public key cryptography techniques (e.g., key exchange, public key encryption, signature) are available to provide fundamental security services for IoTs. However, despite their pervasiveness and well-proven security, they also have been shown to be highly energy costly for embedded devices. Hence, it is a critical task to improve the energy efficiency of standard cryptographic services, while preserving their desirable properties simultaneously.\n In this paper, we exploit synergies among various cryptographic primitives with algorithmic optimizations to substantially reduce the energy consumption of standard cryptographic techniques on embedded devices. Our contributions are: (i) We harness special precomputation techniques, which have not been considered for some important cryptographic standards to boost the performance of key exchange, integrated encryption, and hybrid constructions. (ii) We provide self-certification for these techniques to push their performance to the edge. (iii) We implemented our techniques and their counterparts on 8-bit AVR ATmega 2560 and evaluated their performance. We used microECC library and made the implementations on NIST-recommended secp192 curve, due to its standardization. Our experiments confirmed significant improvements on the battery life (up to 7x) while preserving the desirable properties of standard techniques. Moreover, to the best of our knowledge, we provide the first open-source framework including such set of optimizations on low-end devices.",
"title": ""
},
{
"docid": "a8553e9f90e8766694f49dcfdeab83b7",
"text": "The need for solid-state ac-dc converters to improve power quality in terms of power factor correction, reduced total harmonic distortion at input ac mains, and precisely regulated dc output has motivated the investigation of several topologies based on classical converters such as buck, boost, and buck-boost converters. Boost converters operating in continuous-conduction mode have become particularly popular because reduced electromagnetic interference levels result from their utilization. Within this context, this paper introduces a bridgeless boost converter based on a three-state switching cell (3SSC), whose distinct advantages are reduced conduction losses with the use of magnetic elements with minimized size, weight, and volume. The approach also employs the principle of interleaved converters, as it can be extended to a generic number of legs per winding of the autotransformers and high power levels. A literature review of boost converters based on the 3SSC is initially presented so that key aspects are identified. The theoretical analysis of the proposed converter is then developed, while a comparison with a conventional boost converter is also performed. An experimental prototype rated at 1 kW is implemented to validate the proposal, as relevant issues regarding the novel converter are discussed.",
"title": ""
},
{
"docid": "e4f62bc47ca11c5e4c7aff5937d90c88",
"text": "CopyCat is an American Sign Language (ASL) game, which uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). Our data set is characterized by disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. The dataset consisted of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the back of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach by using leave-one-out validation; this technique iterates through each child, training on data from four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 91.75% to 73.73% for the user-independent models.",
"title": ""
},
{
"docid": "724c74408f59edaf1b1b4859ccd43ee9",
"text": "Motion sickness is a common disturbance occurring in healthy people as a physiological response to exposure to motion stimuli that are unexpected on the basis of previous experience. The motion can be either real, and therefore perceived by the vestibular system, or illusory, as in the case of visual illusion. A multitude of studies has been performed in the last decades, substantiating different nauseogenic stimuli, studying their specific characteristics, proposing unifying theories, and testing possible countermeasures. Several reviews focused on one of these aspects; however, the link between specific nauseogenic stimuli and the unifying theories and models is often not clearly detailed. Readers unfamiliar with the topic, but studying a condition that may involve motion sickness, can therefore have difficulties to understand why a specific stimulus will induce motion sickness. So far, this general audience struggles to take advantage of the solid basis provided by existing theories and models. This review focuses on vestibular-only motion sickness, listing the relevant motion stimuli, clarifying the sensory signals involved, and framing them in the context of the current theories.",
"title": ""
},
{
"docid": "b51021e995fc4be50028a0a152db7e7a",
"text": "Human pose estimation using deep neural networks aims to map input images with large variations into multiple body keypoints, which must satisfy a set of geometric constraints and interdependence imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high dimensional feature space. We believe that the deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, for example those highly coupled geometric characteristics and interdependence between keypoints in human poses. In this work, we propose to explore how external knowledge can be effectively represented and injected into the deep neural networks to guide its training process using learned projections that impose proper prior. Specifically, we use the stacked hourglass design and inception-resnet module to construct a fractal network to regress human pose images into heatmaps with no explicit graphical modeling. We encode external knowledge with visual features, which are able to characterize the constraints of human body models and evaluate the fitness of intermediate network output. We then inject these external features into the neural network using a projection matrix learned using an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit in guided learning with knowledge projection is evaluated on two widely used human pose estimation benchmarks. Our approach achieves state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "4e56d4b3fe5ed2285487ea98915a359c",
"text": "A 1.2 V 60 GHz 120 mW phase-locked loop employing a quadrature differential voltage-controlled oscillator, a programmable charge pump, and a frequency quadrupler is presented. Implemented in a 90 nm CMOS process and operating at 60 GHz with a 1.2 V supply, the PLL achieves a phase noise of −91 dBc/Hz at a frequency offset of 1 MHz.",
"title": ""
},
{
"docid": "5b07bc318cb0f5dd7424cdcc59290d31",
"text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.",
"title": ""
},
{
"docid": "3500278940baaf6f510ad47463cbf5ed",
"text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (M-MaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTM-CNN consistently shows strong performances in several tasks (i.e., measuring textual similarity, identifying paraphrases, recognizing textual entailment). According to the experimental results on the STS Benchmark dataset and the SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features), nor does it require pretrained word embeddings to have the same dimension.",
"title": ""
},
{
"docid": "08bef09a01414bafcbc778fea85a7c0a",
"text": "The use of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkin & Terzopoulos (Int. J. Comput. Vision 1, 1987, 321-331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.",
"title": ""
},
{
"docid": "5c3b5f415c2789a01e0314487c281dee",
"text": "In this paper we propose two algorithms for numerical fractional integration and Caputo fractional differentiation. We present a modification of trapezoidal rule that is used to approximate finite integrals, the new modification extends the application of the rule to approximate integrals of arbitrary order a > 0. We then, using the new modification derive an algorithm to approximate fractional derivatives of arbitrary order a > 0, where the fractional derivative based on Caputo definition, for a given function by a weighted sum of function and its ordinary derivatives values at specified points. The study is conducted through illustrative examples and error analysis. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "225c54aa742d510876a413ff66804b46",
"text": "Various models and derived measures of arterial function have been proposed to describe and quantify pulsatile hemodynamics in humans. A major distinction can be drawn between lumped models based on circuit theory that assume infinite pulse wave velocity versus distributed, propagative models based on transmission line theory that acknowledge finite wave velocity and account for delays, wave reflection, and spatial and temporal pressure gradients within the arterial system. Although both approaches have produced useful insights into human arterial pathophysiology, there are important limitations of the lumped approach. The arterial system is heterogeneous and various segments respond differently to cardiovascular disease risk factors including advancing age. Lumping divergent change into aggregate summary variables can obscure abnormalities in regional arterial function. Analysis of a limited number of summary variables obtained by measuring aortic input impedance may provide novel insights and inform development of new treatments aimed at preventing or reversing abnormal pulsatile hemodynamics.",
"title": ""
},
{
"docid": "ea0ee8011eacdd00cdc8ba3df4eeee6f",
"text": "Despite achieving the highest classification accuracy in a wide variety of application areas, the artificial neural network has one disadvantage: the way it comes to a decision is not easily comprehensible. This lack of explanation ability reduces the acceptability of neural networks in data mining and decision systems. This drawback is the reason why researchers have proposed many rule extraction algorithms to solve the problem. Recently, the Deep Neural Network (DNN) has been achieving profound results over the standard neural network for classification and recognition problems. It is a hot machine learning area proven both useful and innovative. This paper thoroughly reviews various rule extraction algorithms, considering the classification scheme: decompositional, pedagogical, and eclectic. It also presents an evaluation of these algorithms based on the neural network structure with which each algorithm is intended to work. The main contribution of this review is to show that there are only limited studies of rule extraction algorithms for DNNs. Keywords: Artificial neural network; Deep neural network; Rule extraction; Decompositional; Pedagogical; Eclectic.",
"title": ""
},
{
"docid": "e860516582423405be4b83001ae3a1c3",
"text": "Purpose – The purpose of the paper is to improve traditional knowledge management models in light of complexity theory, emphasizing the importance of moving away from hierarchical relationships among data, information, knowledge, and wisdom. Design/methodology/approach – Traditional definitions and models are critically reviewed and their weaknesses highlighted. A transformational perspective of the traditional hierarchies is proposed to highlight the need to develop better perspectives. The paper demonstrates the holistic nature of data, information, knowledge, and wisdom, and how they are all based on an interpretation of existence. Findings – Existing models are logically extended, by adopting a complexity-based perspective, to propose a new model – the E2E model – which highlights the non-linear relationships among existence, data, information, knowledge, wisdom, and enlightenment, as well as the nature of understanding as the process that defines the differences among these constructs. The meaning of metas (such as meta-data, meta-information, and meta-knowledge) is discussed, and a reconstitution of knowledge management is proposed. Practical implications – The importance of understanding as a concept to create useful metaphors for knowledge management practitioners is emphasized, and the crucial importance of the metas for knowledge management is shown. Originality/value – A new model of the cognitive system of knowledge is proposed, based on application of complexity theory to knowledge management. Understanding is identified as the basis of the conversion process among an extended range of knowledge constructs, and the scope of knowledge management is redefined.",
"title": ""
}
] |
scidocsrr
|
05820bc33154ef4fb9e5e6947f91e643
|
A Review and Future Perspectives of Arabic Question Answering Systems
|
[
{
"docid": "afd00b4795637599f357a7018732922c",
"text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.",
"title": ""
},
{
"docid": "aafda1cab832f1fe92ce406676e3760f",
"text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.",
"title": ""
},
{
"docid": "7930ba041e38ded81871866da8681a9d",
"text": "We describe the design and implementation of a question answering (QA) system called QARAB. It is a system that takes natural language questions expressed in the Arabic language and attempts to provide short answers. The system’s primary source of knowledge is a collection of Arabic newspaper text extracted from Al-Raya, a newspaper published in Qatar. During the last few years the information retrieval community has attacked this problem for English using standard IR techniques with only mediocre success. We are tackling this problem for Arabic using traditional Information Retrieval (IR) techniques coupled with a sophisticated Natural Language Processing (NLP) approach. To identify the answer, we adopt a keyword matching strategy along with matching simple structures extracted from both the question and the candidate documents selected by the IR system. To achieve this goal, we use an existing tagger to identify proper names and other crucial lexical items and build lexical entries for them on the fly. We also carry out an analysis of Arabic question forms and attempt a better understanding of what kinds of answers users find satisfactory. The paucity of studies of real users has limited results in earlier research.",
"title": ""
},
{
"docid": "48f261e94383c49fc63e9c4341236033",
"text": "Due to very fast growth of information in the last few decades, getting precise information in real time is becoming increasingly difficult. Search engines such as Google and Yahoo are helping in finding the information but the information provided by them are in the form of documents which consumes a lot of time of the user. Question Answering Systems have emerged as a good alternative to search engines where they produce the desired information in a very precise way in the real time. This saves a lot of time for the user. There has been a lot of research in the field of English and some European language Question Answering Systems. However, Arabic Question Answering Systems could not match the pace due to some inherent difficulties with the language itself as well as due to lack of tools available to assist the researchers. Question classification is a very important module of Question Answering Systems. In this paper, we are presenting a method to accurately classify the Arabic questions in order to retrieve precise answers. The proposed method gives promising results.",
"title": ""
}
] |
[
{
"docid": "721d26f8ea042c2fb3a87255a69e85f5",
"text": "The Time-Triggered Protocol (TTP), which is intended for use in distributed real-time control applications that require a high dependability and guaranteed timeliness, is discussed. It integrates all services that are required in the design of a fault-tolerant real-time system, such as predictable message transmission, message acknowledgment in group communication, clock synchronization, membership, rapid mode changes, redundancy management, and temporary blackout handling. It supports fault-tolerant configurations with replicated nodes and replicated communication channels. TTP provides these services with a small overhead so it can be used efficiently on twisted pair channels as well as on fiber optic networks.",
"title": ""
},
{
"docid": "1ba846444a638ccc9ccaa930328fff23",
"text": "Many practical frequency-modulated continuous-wave (FMCW) radars utilize consecutive upchirps and/or downchirps of the same ramp slope to extract the desired range and velocity information of the targets. In this contribution it is demonstrated that consecutive ramp sequences provide only little more information compared to a non-consecutive sequence, but lead to a huge calculation complexity. Additional significant information on the target states for a non-consecutive sequence is gained by using the ramp slope as a design parameter. The ramp sequence distributed over a certain period is temporally aligned to a given point in time. State estimation is done by minimizing a cost function. A significant advantage of a cost function approach is that ghost targets are suppressed directly. For test purposes a 77-GHz FMCW radar prototype is used in an automotive environment.",
"title": ""
},
{
"docid": "cb9f89949979f2144e45e06dccdde2e8",
"text": "This paper describes the double mode surface acoustic wave (DMS) filter design techniques for achieving the ultra-steep cut-off characteristics and low insertion loss required for the Rx filter in the personal communications services (PCS) duplexer. Simulations demonstrate that the optimal combination of the additional common ground inductance Lg and the coupling capacitance Cc between the input and output terminals of the DMS filters drastically enhances the skirt steepness and attenuation for the lower frequency side of the passband. Based on this result, we propose a novel DMS filter structure that utilizes the parasitic reactance generated in bonding wires and interdigital transducer (IDT) busbars as Lg and Cc, respectively. Because the proposed structure does not need any additional reactance component, the filter size can be small. Moreover, we propose a compact multiple-connection configuration for low insertion loss. Applying these technologies to the Rx filter, we successfully develop a PCS SAW duplexer.",
"title": ""
},
{
"docid": "deaed3405c242023f6c52a777f25ba88",
"text": "Adipose tissue is a complex, essential, and highly active metabolic and endocrine organ. Besides adipocytes, adipose tissue contains connective tissue matrix, nerve tissue, stromovascular cells, and immune cells. Together these components function as an integrated unit. Adipose tissue not only responds to afferent signals from traditional hormone systems and the central nervous system but also expresses and secretes factors with important endocrine functions. These factors include leptin, other cytokines, adiponectin, complement components, plasminogen activator inhibitor-1, proteins of the renin-angiotensin system, and resistin. Adipose tissue is also a major site for metabolism of sex steroids and glucocorticoids. The important endocrine function of adipose tissue is emphasized by the adverse metabolic consequences of both adipose tissue excess and deficiency. A better understanding of the endocrine function of adipose tissue will likely lead to more rational therapy for these increasingly prevalent disorders. This review presents an overview of the endocrine functions of adipose tissue.",
"title": ""
},
{
"docid": "9bbd6a417b373fb19f691d1edc728a6c",
"text": "The increasing advances in hardware technology for sensor processing and mobile technology has resulted in greater access and availability of sensor data from a wide variety of applications. For example, the commodity mobile devices contain a wide variety of sensors such as GPS, accelerometers, and other kinds of data. Many other kinds of technology such as RFID-enabled sensors also produce large volumes of data over time. This has lead to a need for principled methods for efficient sensor data processing. This chapter will provide an overview of the challenges of sensor data analytics and the different areas of research in this context. We will also present the organization of the chapters in this book in this context.",
"title": ""
},
{
"docid": "0a8300fd3760223f5bf0df3d1187a6a5",
"text": "The glare illusion is commonly used in CG rendering, especially in game engines, to achieve a higher brightness than that of the maximum luminance of a display. In this work, we measure the perceived luminance of the glare illusion in a psychophysical experiment. To evoke the illusion, an image is convolved with either a point spread function (PSF) of the eye or a Gaussian kernel. It is found that 1) the Gaussian kernel evokes an illusion of the same or higher strength than that produced by the PSF while being computationally much less expensive, 2) the glare illusion can raise the perceived luminance by 20 -- 35%, 3) some convolution kernels can produce undesirable Mach-band effects and thereby reduce the brightness boost of the glare illusion. The reported results have practical implications for glare rendering in computer graphics.",
"title": ""
},
{
"docid": "93e945261d5d04f51a8274b84d3a3231",
"text": "Cloud service providers (CSPs) often overbook their resources with user applications despite having to maintain service-level agreements with their customers. Overbooking is attractive to CSPs because it helps to reduce power consumption in the data center by packing more user jobs in less number of resources while improving their profits. Overbooking becomes feasible because user applications tend to overestimate their resource requirements utilizing only a fraction of the allocated resources. Arbitrary resource overbooking ratios, however, may be detrimental to soft real-time applications, such as airline reservations or Netflix video streaming, which are increasingly hosted in the cloud. The changing dynamics of the cloud preclude an offline determination of overbooking ratios. To address these concerns, this paper presents iOverbook, which uses a machine learning approach to make systematic and online determination of overbooking ratios such that the quality of service needs of soft real-time systems can be met while still benefiting from overbooking. Specifically, iOverbook utilizes historic data of tasks and host machines in the cloud to extract their resource usage patterns and predict future resource usage along with the expected mean performance of host machines. To evaluate our approach, we have used a large usage trace made available by Google of one of its production data centers. In the context of the traces, our experiments show that iOverbook can help CSPs improve their resource utilization by an average of 12.5% and save 32% power in the data center.",
"title": ""
},
{
"docid": "75ef3706a44edf1a96bcb0ce79b07761",
"text": "Bag-of-words (BOW), which represents an image by the histogram of local patches on the basis of a visual vocabulary, has attracted intensive attention in visual categorization due to its good performance and flexibility. Conventional BOW neglects the contextual relations between local patches due to its Naïve Bayesian assumption. However, it is well known that contextual relations play an important role for human beings to recognize visual categories from their local appearance. This paper proposes a novel contextual bag-of-words (CBOW) representation to model two kinds of typical contextual relations between local patches, i.e., a semantic conceptual relation and a spatial neighboring relation. To model the semantic conceptual relation, visual words are grouped on multiple semantic levels according to the similarity of class distribution induced by them, accordingly local patches are encoded and images are represented. To explore the spatial neighboring relation, an automatic term extraction technique is adopted to measure the confidence that neighboring visual words are relevant. Word groups with high relevance are used and their statistics are incorporated into the BOW representation. Classification is taken using the support vector machine with an efficient kernel to incorporate the relational information. The proposed approach is extensively evaluated on two kinds of visual categorization tasks, i.e., video event and scene categorization. Experimental results demonstrate the importance of contextual relations of local patches and the CBOW shows superior performance to conventional BOW.",
"title": ""
},
{
"docid": "a73917d842c18ed9c36a13fe9187ea4c",
"text": "Brain Magnetic Resonance Image (MRI) plays a non-substitutive role in clinical diagnosis. The symptom of many diseases corresponds to the structural variants of brain. Automatic structure segmentation in brain MRI is of great importance in modern medical research. Some methods were developed for automatic segmenting of brain MRI but failed to achieve desired accuracy. In this paper, we proposed a new patch-based approach for automatic segmentation of brain MRI using convolutional neural network (CNN). Each brain MRI acquired from a small portion of public dataset is firstly divided into patches. All of these patches are then used for training CNN, which is used for automatic segmentation of brain MRI. Experimental results showed that our approach achieved better segmentation accuracy compared with other deep learning methods.",
"title": ""
},
{
"docid": "ff69af9c6ce771b0db8caeaa6da5478f",
"text": "The use of Internet as a mean of shopping goods and services is growing over the past decade. Businesses in the e-commerce sector realize that the key factors for success are not limited to the existence of a website and low prices but must also include high standards of e-quality. Research indicates that the attainment of customer satisfaction brings along plenty of benefits. Furthermore, trust is of paramount importance, in ecommerce, due to the fact that that its establishment can diminish the perceived risk of using an internet service. The purpose of this study is to investigate the impact of customer perceived quality of an internet shop on customers’ satisfaction and trust. In addition, the possible effect of customer satisfaction on trust is also examined. An explanatory research approach was adopted in order to identify causal relationships between e-quality, customer satisfaction and trust. This was accomplished through field research by utilizing an interviewer-administered questionnaire. The questionnaire was largely based on existing constructs in relative literature. E-quality was divided into 5 dimensions, namely ease of use, e-scape, customization, responsiveness, and assurance. After being successfully pilot-tested by the managers of 3 Greek companies developing ecommerce software, 4 managers of Greek internet shops and 5 internet shoppers, the questionnaire was distributed to internet shoppers in central Greece. This process had as a result a total of 171 correctly answered questionnaires. Reliability tests and statistical analyses were performed to both confirm scale reliability and test research hypotheses. The findings indicate that all the examined e-quality dimensions expose a significant positive influence on customer satisfaction, with ease of use, e-scape and assurance being the most important ones. One the other hand, rather surprisingly, the only e-quality dimension that proved to have a significant positive impact on trust was customization. Finally, satisfaction was revealed to have a significant positive relation with trust.",
"title": ""
},
{
"docid": "a4ec796aa94914eead676eac4a688753",
"text": "Providing transactional primitives of NAND flash based solid state disks (SSDs) have demonstrated a great potential for high performance transaction processing and relieving software complexity. Similar with software solutions like write-ahead logging (WAL) and shadow paging, transactional SSD has two parts of overhead which include: 1) write overhead under normal condition, and 2) recovery overhead after power failures. Prior transactional SSD designs utilize out-of-band (OOB) area in flash pages to store transaction information to reduce the first part of overhead. However, they are required to scan a large part of or even whole SSD after power failures to abort unfinished transactions. Another limitation of prior approaches is the unicity of transactional primitive they provided. In this paper, we propose a new transactional SSD design named Möbius. Möbius provides different types of transactional primitives to support static and dynamic transactions separately. Möbius flash translation layer (mFTL), which combines normal FTL with transaction processing by storing mapping and transaction information together in a physical flash page as atom inode. By amortizing the cost of transaction processing with FTL persistence, MFTL achieve high performance in normal condition and does not increase write amplification ratio. After power failures, Möbius can leverage atom inode to eliminate unnecessary scanning and recover quickly. We implemented a prototype of Möbius and compare it with other state-of-art transactional SSD designs. Experimental results show that Möbius can at most 67% outperform in transaction throughput (TPS) and 29 times outperform in recovery time while still have similar or even better write amphfication ratio comparing with prior hardware approaches.",
"title": ""
},
{
"docid": "b867d81593998fb13359b19f52e3923e",
"text": "VoIP (Voice over IP) is a modern service with enormous potential for yet further growth. It uses the already available and universally implemented IP transport platform. One significant problem, however, is ensuring the Quality of Service, abbreviated QoS. This paper addresses exactly that issue. In an extensive investigation the influence of jitter buffers on QoS is being examined in depth. Two implementations, namely a passive FIFO buffer and an active PJSIP buffer are considered. The results obtained are presented in several diagrams and interpreted. They provide valuable insights and indications as to how procedures to ensure QoS in IP networks can be planned and implemented. The paper concludes with a summary and outlook on further work.",
"title": ""
},
{
"docid": "f08d5e22264bf287355308330f67d564",
"text": "Group-by is a core database operation that is used extensively in OLTP, OLAP, and decision support systems. In many application scenarios, it is required to group similar but not necessarily equal values. In this paper we propose a new SQL construct that supports similarity-based Group-by (SGB). SGB is not a new clustering algorithm, but rather is a practical and fast similarity grouping query operator that is compatible with other SQL operators and can be combined with them to answer similarity-based queries efficiently. In contrast to expensive clustering algorithms, the proposed similarity group-by operator maintains low execution times while still generating meaningful groupings that address many application needs. The paper presents a general definition of the similarity group-by operation and gives three instances of this definition. The paper also discusses how optimization techniques for the regular group-by can be extended to the case of SGB. The proposed operators are implemented inside PostgreSQL. The performance study shows that the proposed similarity-based group-by operators have good scalability properties with at most only 25% increase in execution time over the regular group-by.",
"title": ""
},
{
"docid": "dd726458660c3dfe05bd775df562e188",
"text": "Maternally deprived rats were treated with tianeptine (15 mg/kg) once a day for 14 days during their adult phase. Their behavior was then assessed using the forced swimming and open field tests. The BDNF, NGF and energy metabolism were assessed in the rat brain. Deprived rats increased the immobility time, but tianeptine reversed this effect and increased the swimming time; the BDNF levels were decreased in the amygdala of the deprived rats treated with saline and the BDNF levels were decreased in the nucleus accumbens within all groups; the NGF was found to have decreased in the hippocampus, amygdala and nucleus accumbens of the deprived rats; citrate synthase was increased in the hippocampus of non-deprived rats treated with tianeptine and the creatine kinase was decreased in the hippocampus and amygdala of the deprived rats; the mitochondrial complex I and II–III were inhibited, and tianeptine increased the mitochondrial complex II and IV in the hippocampus of the non-deprived rats; the succinate dehydrogenase was increased in the hippocampus of non-deprived rats treated with tianeptine. So, tianeptine showed antidepressant effects conducted on maternally deprived rats, and this can be attributed to its action on the neurochemical pathways related to depression.",
"title": ""
},
{
"docid": "4c30af9dd05b773ce881a312bcad9cb9",
"text": "This review summarized various chemical recycling methods for PVC, such as pyrolysis, catalytic dechlorination and hydrothermal treatment, with a view to solving the problem of energy crisis and the impact of environmental degradation of PVC. Emphasis was paid on the recent progress on the pyrolysis of PVC, including co-pyrolysis of PVC with biomass/coal and other plastics, catalytic dechlorination of raw PVC or Cl-containing oil and hydrothermal treatment using subcritical and supercritical water. Understanding the advantage and disadvantage of these treatment methods can be beneficial for treating PVC properly. The dehydrochlorination of PVC mainly happed at low temperature of 250-320°C. The process of PVC dehydrochlorination can catalyze and accelerate the biomass pyrolysis. The intermediates from dehydrochlorination stage of PVC can increase char yield of co-pyrolysis of PVC with PP/PE/PS. For the catalytic degradation and dechlorination of PVC, metal oxides catalysts mainly acted as adsorbents for the evolved HCl or as inhibitors of HCl formation depending on their basicity, while zeolites and noble metal catalysts can produce lighter oil, depending the total number of acid sites and the number of accessible acidic sites. For hydrothermal treatment, PVC decomposed through three stages. In the first region (T<250°C), PVC went through dehydrochlorination to form polyene; in the second region (250°C<T<350°C), polyene decomposed to low-molecular weight compounds; in the third region (350°C<T), polyene further decomposed into a large amount of low-molecular weight compounds.",
"title": ""
},
{
"docid": "d79a1a6398e98855ddd1181c141d7b00",
"text": "In this paper we describe a new binarisation method designed specifically for OCR of low quality camera images: Background Surface Thresholding or BST. This method is robust to lighting variations and produces images with very little noise and consistent stroke width. BST computes a ”surface” of background intensities at every point in the image and performs adaptive thresholding based on this result. The surface is estimated by identifying regions of lowresolution text and interpolating neighbouring background intensities into these regions. The final threshold is a combination of this surface and a global offset. According to our evaluation BST produces considerably fewer OCR errors than Niblack’s local average method while also being more runtime efficient.",
"title": ""
},
{
"docid": "48109c78ad73b1973be3f20a7e6acf26",
"text": "Clustering by integrating multiview representations has become a crucial issue for knowledge discovery in heterogeneous environments. However, most prior approaches assume that the multiple representations share the same dimension, limiting their applicability to homogeneous environments. In this paper, we present a novel tensor-based framework for integrating heterogeneous multiview data in the context of spectral clustering. Our framework includes two novel formulations; that is multiview clustering based on the integration of the Frobenius-norm objective function (MC-FR-OI) and that based on matrix integration in the Frobenius-norm objective function (MC-FR-MI). We show that the solutions for both formulations can be computed by tensor decompositions. We evaluated our methods on synthetic data and two real-world data sets in comparison with baseline methods. Experimental results demonstrate that the proposed formulations are effective in integrating multiview data in heterogeneous environments.",
"title": ""
},
{
"docid": "78f272578191996200259e10d209fe19",
"text": "The information in government web sites, which are widely adopted in many countries, must be accessible for all people, easy to use, accurate and secure. The main objective of this study is to investigate the usability, accessibility and security aspects of e-government web sites in Kyrgyz Republic. The analysis of web government pages covered 55 sites listed in the State Information Resources of the Kyrgyz Republic and five government web sites which were not included in the list. Analysis was conducted using several automatic evaluation tools. Results suggested that government web sites in Kyrgyz Republic have a usability error rate of 46.3 % and accessibility error rate of 69.38 %. The study also revealed security vulnerabilities in these web sites. Although the “Concept of Creation and Development of Information Network of the Kyrgyz Republic” was launched at September 23, 1994, government web sites in the Kyrgyz Republic have not been reviewed and still need great efforts to improve accessibility, usability and security.",
"title": ""
},
{
"docid": "66acaa4909502a8d7213366e0667c3c2",
"text": "Facial rejuvenation, particularly lip augmentation, has gained widespread popularity. An appreciation of perioral anatomy as well as the structural characteristics that define the aging face is critical to achieve optimal patient outcomes. Although techniques and technology evolve continuously, hyaluronic acid (HA) dermal fillers continue to dominate aesthetic practice. A combination approach including neurotoxin and volume restoration demonstrates superior results in select settings.",
"title": ""
},
{
"docid": "2578607ec2e7ae0d2e34936ec352ff6e",
"text": "AI Innovation in Industry is a new department for IEEE Intelligent Systems, and this paper examines some of the basic concerns and uses of AI for big data (AI has been used in several different ways to facilitate capturing and structuring big data, and it has been used to analyze big data for key insights).",
"title": ""
}
] |
scidocsrr
|
36648619b1256c6851371e465190c068
|
An inquiry into the nature and causes of the wealth of internet miscreants
|
[
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
},
{
"docid": "9db9902c0e9d5fc24714554625a04c7a",
"text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.",
"title": ""
}
] |
[
{
"docid": "7e848e98909c69378f624ce7db31dbfa",
"text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.",
"title": ""
},
{
"docid": "7154894c0acda12246877c8f3ab8ab57",
"text": "ABSTRACT Characteristics of variable threshold voltage CMOS (VTCMOS) in the series connected circuits are investigated by means of device simulation. It is newly found that the performance degradation due to the body effect in series connected circuit is suppressed by utilizing VTCMOS. Lowering the threshold voltage (Vth) enhances the drive current and alleviates the degradation due to the series connected configuration. Therefore, larger body effect factor (γ) results in lower Vth and higher oncurrent even in the series connected circuits. These characteristics are attributed to the velocity saturation phenomenon which reduces the drain saturation voltage (Vdsat).",
"title": ""
},
{
"docid": "03b48d35417f4bdae67d46c761f2ce0b",
"text": "We present a unified statistical theory for assessing the significance of apparent signal observed in noisy difference images. The results are usable in a wide range of applications, including fMRI, but are discussed with particular reference to PET images which represent changes in cerebral blood flow elicited by a specific cognitive or sensorimotor task. Our main result is an estimate of the P-value for local maxima of Gaussian, t, chi(2) and F fields over search regions of any shape or size in any number of dimensions. This unifies the P-values for large search areas in 2-D (Friston et al. [1991]: J Cereb Blood Flow Metab 11:690-699) large search regions in 3-D (Worsley et al. [1992]: J Cereb Blood Flow Metab 12:900-918) and the usual uncorrected P-value at a single pixel or voxel.",
"title": ""
},
{
"docid": "c6d84be944630cec1b19d84db2ace2ee",
"text": "This paper describes an effort to model a student’s changing knowledge state during skill acquisition. Dynamic Bayes Nets (DBNs) provide a powerful way to represent and reason about uncertainty in time series data, and are therefore well-suited to model student knowledge. Many general-purpose Bayes net packages have been implemented and distributed; however, constructing DBNs often involves complicated coding effort. To address this problem, we introduce a tool called BNTSM. BNT-SM inputs a data set and a compact XML specification of a Bayes net model hypothesized by a researcher to describe causal relationships among student knowledge and observed behavior. BNT-SM generates and executes the code to train and test the model using the Bayes Net Toolbox [1]. Compared to the BNT code it outputs, BNT-SM reduces the number of lines of code required to use a DBN by a factor of 5. In addition to supporting more flexible models, we illustrate how to use BNT-SM to simulate Knowledge Tracing (KT) [2], an established technique for student modeling. The trained DBN does a better job of modeling and predicting student performance than the original KT code (Area Under Curve = 0.610 > 0.568), due to differences in how it estimates parameters.",
"title": ""
},
{
"docid": "b42b2496b55c67c284b0399be71e8873",
"text": "We present a method for the online calibration of a compact series elastic actuator installed in a modular snake robot. Calibration is achieved by using the measured motor current of the actuator's highly geared motor and a simple linear model for the spring's estimated torque. A heuristic is developed to identify operating conditions where motor current is an accurate estimator of output torque, even when the motor is heavily geared. This heuristic is incorporated into an unscented Kalman filter that estimates a spring constant in real-time. Using this method on a prototype module of a series elastic snake robot, we are able accurately estimate the module's output torque, even with a poor initial calibration.",
"title": ""
},
{
"docid": "f4c2a00b8a602203c86eaebc6f111f46",
"text": "Tamara Kulesa: Hello. This is Tamara Kulesa, Worldwide Marketing Manager for IBM Global Business Services for the Global Government Industry. I am here today with Susanne Dirks, Manager of the IBM Institute for Business Values Global Center for Economic Development in Ireland. Susanne is responsible for the research and writing of the newly published report, \"A Vision of Smarter Cities: How Cities Can Lead the Way into a Prosperous and Sustainable Future.\" Susanne, thank you for joining me today.",
"title": ""
},
{
"docid": "c2ed6ac38a6014db73ba81dd898edb97",
"text": "The ability of personality traits to predict important life outcomes has traditionally been questioned because of the putative small effects of personality. In this article, we compare the predictive validity of personality traits with that of socioeconomic status (SES) and cognitive ability to test the relative contribution of personality traits to predictions of three critical outcomes: mortality, divorce, and occupational attainment. Only evidence from prospective longitudinal studies was considered. In addition, an attempt was made to limit the review to studies that controlled for important background factors. Results showed that the magnitude of the effects of personality traits on mortality, divorce, and occupational attainment was indistinguishable from the effects of SES and cognitive ability on these outcomes. These results demonstrate the influence of personality traits on important life outcomes, highlight the need to more routinely incorporate measures of personality into quality of life surveys, and encourage further research about the developmental origins of personality traits and the processes by which these traits influence diverse life outcomes.",
"title": ""
},
{
"docid": "f5ad4e1901dc96de45cb191bf1869828",
"text": "The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixedlength vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha) that is ranked among the top in the Shared Task, on both the in-domain test set (obtaining a 74.9% accuracy) and on the crossdomain test set (also attaining a 74.9% accuracy), demonstrating that the model generalizes well to the cross-domain data. Our model is equipped with intra-sentence gated-attention composition which helps achieve a better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.",
"title": ""
},
{
"docid": "206263868f70a1ce6aa734019d215a03",
"text": "This paper examines microblogging information diffusion activity during the 2011 Egyptian political uprisings. Specifically, we examine the use of the retweet mechanism on Twitter, using empirical evidence of information propagation to reveal aspects of work that the crowd conducts. Analysis of the widespread contagion of a popular meme reveals interaction between those who were \"on the ground\" in Cairo and those who were not. However, differences between information that appeals to the larger crowd and those who were doing on-the-ground work reveal important interplay between the two realms. Through both qualitative and statistical description, we show how the crowd expresses solidarity and does the work of information processing through recommendation and filtering. We discuss how these aspects of work mutually sustain crowd interaction in a politically sensitive context. In addition, we show how features of this retweet-recommendation behavior could be used in combination with other indicators to identify information that is new and likely coming from the ground.",
"title": ""
},
{
"docid": "4ea07335d42a859768565c8d88cd5280",
"text": "This paper brings together research from two different fields – user modelling and web ontologies – in attempt to demonstrate how recent semantic trends in web development can be combined with the modern technologies of user modelling. Over the last several years, a number of user-adaptive systems have been exploiting ontologies for the purposes of semantics representation, automatic knowledge acquisition, domain and user model visualisation and creation of interoperable and reusable architectural solutions. Before discussing these projects, we first overview the underlying user modelling and ontological technologies. As an example of the project employing ontology-based user modelling, we present an experiment design for translation of overlay student models for relative domains by means of ontology mapping.",
"title": ""
},
{
"docid": "ebb0828b532e8896e87ed4f365f8744a",
"text": "While much attention is given to young people’s online privacy practices on sites like Facebook, current theories of privacy fail to account for the ways in which social media alter practices of information-sharing and visibility. Traditional models of privacy are individualistic, but the realities of privacy reflect the location of individuals in contexts and networks. The affordances of social technologies, which enable people to share information about others, further preclude individual control over privacy. Despite this, social media technologies primarily follow technical models of privacy that presume individual information control. We argue that the dynamics of sites like Facebook have forced teens to alter their conceptions of privacy to account for the networked nature of social media. Drawing on their practices and experiences, we offer a model of networked privacy to explain how privacy is achieved in networked publics.",
"title": ""
},
{
"docid": "fb6d89e2faee942a0a92ded6ead0d8c7",
"text": "Each relationship has its own personality. Almost immediately after a social interaction begins, verbal and nonverbal behaviors become synchronized. Even in asocial contexts, individuals tend to produce utterances that match the grammatical structure of sentences they have recently heard or read. Three projects explore language style matching (LSM) in everyday writing tasks and professional writing. LSM is the relative use of 9 function word categories (e.g., articles, personal pronouns) between any 2 texts. In the first project, 2 samples totaling 1,744 college students answered 4 essay questions written in very different styles. Students automatically matched the language style of the target questions. Overall, the LSM metric was internally consistent and reliable across writing tasks. Women, participants of higher socioeconomic status, and students who earned higher test grades matched with targets more than others did. In the second project, 74 participants completed cliffhanger excerpts from popular fiction. Judges' ratings of excerpt-response similarity were related to content matching but not function word matching, as indexed by LSM. Further, participants were not able to intentionally increase style or content matching. In the final project, an archival study tracked the professional writing and personal correspondence of 3 pairs of famous writers across their relationships. Language matching in poetry and letters reflected fluctuations in the relationships of 3 couples: Sigmund Freud and Carl Jung, Elizabeth Barrett and Robert Browning, and Sylvia Plath and Ted Hughes. Implications for using LSM as an implicit marker of social engagement and influence are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved).",
"title": ""
},
{
"docid": "eb8f0a30d222b89e5fda3ea1d83ea525",
"text": "We present a method which exploits automatically generated scientific discourse annotations to create a content model for the summarisation of scientific articles. Full papers are first automatically annotated using the CoreSC scheme, which captures 11 contentbased concepts such as Hypothesis, Result, Conclusion etc at the sentence level. A content model which follows the sequence of CoreSC categories observed in abstracts is used to provide the skeleton of the summary, making a distinction between dependent and independent categories. Summary creation is also guided by the distribution of CoreSC categories found in the full articles, in order to adequately represent the article content. Finally, we demonstrate the usefulness of the summaries by evaluating them in a complex question answering task. Results are very encouraging as summaries of papers from automatically obtained CoreSCs enable experts to answer 66% of complex content-related questions designed on the basis of paper abstracts. The questions were answered with a precision of 75%, where the upper bound for human summaries (abstracts) was 95%.",
"title": ""
},
{
"docid": "03eb1360ba9e3e38f082099ed08469ed",
"text": "In this paper some concept of fuzzy set have discussed and one fuzzy model have applied on agricultural farm for optimal allocation of different crops by considering maximization of net benefit, production and utilization of labour . Crisp values of the objective functions obtained from selected nondominated solutions are converted into triangular fuzzy numbers and ranking of those fuzzy numbers are done to make a decision. .",
"title": ""
},
{
"docid": "404a32f89d6273a63b7ae945514655d2",
"text": "Miniaturized minimally-invasive implants with wireless power and communication links have the potential to enable closed-loop treatments and precise diagnostics. As with wireless power transfer, robust wireless communication between implants and external transceivers presents challenges and tradeoffs with miniaturization and increasing depth. Both link efficiency and available bandwidth need to be considered for communication capacity. This paper analyzes and reviews active electromagnetic and ultrasonic communication links for implants. Example transmitter designs are presented for both types of links. Electromagnetic links for mm-sized implants have demonstrated high data rates sufficient for most applications up to Mbps range; nonetheless, they have so far been limited to depths under 5 cm. Ultrasonic links, on the other hand, have shown much deeper transmission depths, but with limited data rate due to their low operating frequency. Spatial multiplexing techniques are proposed to increase ultrasonic data rates without additional power or bandwidth.",
"title": ""
},
{
"docid": "5054ad32c33dc2650c1dcee640961cd5",
"text": "Benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision (LeCun et al., 1998; Deng et al., 2009; ). The challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches, and have led to great advances of the state of the art. Even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as ImageNet (Krizhevsky et al., 2012; LeCun et al., 2015). Neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor (DVS; Lichtsteiner et al., 2008). These sensors and their DAVIS (Dynamic and Activepixel Vision Sensor) and ATIS (Asynchronous Time-based Image Sensor) derivatives (Brandli et al., 2014; Posch et al., 2014) are inspired by biological vision by generating streams of asynchronous events indicating local log-intensity brightness changes. They thereby greatly reduce the amount of data to be processed, and their dynamic nature makes them a good fit for domains such as optical flow, object tracking, action recognition, or dynamic scene understanding. Compared to classical computer vision, neuromorphic vision is a younger and much smaller field of research, and lacks benchmarks, which impedes the progress of the field. To address this we introduce the largest event-based vision benchmark dataset published to date, hoping to satisfy a growing demand and stimulate challenges for the community. In particular, the availability of such benchmarks should help the development of algorithms processing event-based vision input, allowing a direct fair comparison of different approaches. We have explicitly chosen mostly dynamic vision tasks such as action recognition or tracking, which could benefit from the strengths of neuromorphic vision sensors, although algorithms that exploit these features are largely missing. A major reason for the lack of benchmarks is that currently neuromorphic vision sensors are only available as R&D prototypes. Nonetheless, there are several datasets already available; see Tan et al. (2015) for an informative review. Unlabeled DVS data was made available around 2007 in the jAER project1 and was used for development of spike timing-based unsupervised feature learning e.g., in Bichler et al. (2012). The first labeled and published event-based neuromorphic vision sensor benchmarks were created from the MNIST digit recognition dataset by jiggling the image on the screen (see Serrano-Gotarredona and Linares-Barranco, 2015 for an informative history) and later to reduce frame artifacts by jiggling the camera view with a pan-tilt unit (Orchard et al., 2015). These datasets automated the scene movement necessary to generate DVS output from the static images, and will be an important step forward for evaluating neuromorphic object recognition systems such as spiking deep networks (Pérez-Carrasco et al., 2013; O’Connor et al., 2013; Cao et al., 2014; Diehl et al., 2015), which so far have been tested mostly on static image datasets converted",
"title": ""
},
{
"docid": "49d1d7c47a52fdaf8d09053f63d225e6",
"text": "Theory of language, communicative competence, functional account of language use, discourse analysis and social-linguistic considerations have mainly made up the theoretical foundations of communicative approach to language teaching. The principles contain taking communication as the center, reflecting Real Communicating Process, avoiding Constant Error-correcting, and putting grammar at a right place.",
"title": ""
},
{
"docid": "2e0262fce0a7ba51bd5ccf9e1397b0ca",
"text": "We present a topology detection method combining smart meter sensor information and sparse line measurements. The problem is formulated as a spanning tree identification problem over a graph given partial nodal and edge power flow information. In the deterministic case of known nodal power consumption and edge power flow we provide sensor placement criterion which guarantees correct identification of all spanning trees. We then present a detection method which is polynomial in complexity to the size of the graph. In the stochastic case where loads are given by forecasts derived from delayed smart meter data, we provide a combinatorial complexity MAP detector and a polynomial complexity approximate MAP detector which is shown to work near optimum in all numerical cases.",
"title": ""
},
{
"docid": "e3ef98c0dae25c39e4000e62a348479e",
"text": "A New Framework For Hybrid Models By Coupling Latent Variables 1 User specifies p with a generative and a discriminative component and latent z p(x, y, z) = p(y|x, z) · p(x, z). The p(y|x, z), p(x, z) can be very general; they only share latent z, not parameters! 2We train both components using a multi-conditional objective α · Eq(x,y)Eq(z|x) ` (y, p(y|x, z)) } {{ } discriminative loss (`2, log) +β ·Df [q(x, z)||p(x, z)] } {{ } f-divergence (KL, JS) where q(x, y) is data distribution and α, β > 0 are hyper-parameters.",
"title": ""
}
] |
scidocsrr
|
8d04e921c18c358db5d71fb5a3d314da
|
Learning Statistical Scripts with LSTM Recurrent Neural Networks
|
[
{
"docid": "51256458513e99bf3750049d542692b8",
"text": "Text-level discourse parsing remains a challenge: most approaches employ features that fail to capture the intentional, semantic, and syntactic aspects that govern discourse coherence. In this paper, we propose a recursive model for discourse parsing that jointly models distributed representations for clauses, sentences, and entire discourses. The learned representations can to some extent learn the semantic and intentional import of words and larger discourse units automatically,. The proposed framework obtains comparable performance regarding standard discoursing parsing evaluations when compared against current state-of-art systems.",
"title": ""
}
] |
[
{
"docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b",
"text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.",
"title": ""
},
{
"docid": "8c4ece41e96c08536375e9e72dc9ddc3",
"text": "BACKGROUND\nWe present one unusual case of anophthalmia and craniofacial cleft, probably due to congenital toxoplasmosis only.\n\n\nCASE PRESENTATION\nA two-month-old male had a twin in utero who disappeared between the 7th and the 14th week of gestation. At birth, the baby presented anophthalmia and craniofacial cleft, and no sign compatible with genetic or exposition/deficiency problems, like the Wolf-Hirschhorn syndrome or maternal vitamin A deficiency. Congenital toxoplasmosis was confirmed by the presence of IgM abs and IgG neo-antibodies in western blot, as well as by real time PCR in blood. CMV infection was also discarded by PCR and IgM negative results. Structures suggestive of T. gondii pseudocysts were observed in a biopsy taken during the first functional/esthetic surgery.\n\n\nCONCLUSIONS\nWe conclude that this is a rare case of anophthalmia combined with craniofacial cleft due to congenital toxoplasmosis, that must be considered by physicians. This has not been reported before.",
"title": ""
},
{
"docid": "7b77dacb8688c3d8093ec7cbd36c55eb",
"text": "Recent end-to-end task oriented dialog systems use memory architectures to incorporate external knowledge in their dialogs. Current work makes simplifying assumptions about the structure of the knowledge base, such as the use of triples to represent knowledge, and combines dialog utterances (context) as well as knowledge base (KB) results as part of the same memory. This causes an explosion in the memory size, and makes the reasoning over memory harder. In addition, such a memory design forces hierarchical properties of the data to be fit into a triple structure of memory. This requires the memory reader to infer relationships across otherwise connected attributes. In this paper we relax the strong assumptions made by existing architectures and separate memories used for modeling dialog context and KB results. Instead of using triples to store KB results, we introduce a novel multi-level memory architecture consisting of cells for each query and their corresponding results. The multi-level memory first addresses queries, followed by results and finally each key-value pair within a result. We conduct detailed experiments on three publicly available task oriented dialog data sets and we find that our method conclusively outperforms current state-ofthe-art models. We report a 15-25% increase in both entity F1 and BLEU scores.",
"title": ""
},
{
"docid": "97799539e738e05847fbbae5dab55b49",
"text": "The advent of software agents gave rise to much discussion of just what such an agent is, and of how they differ from programs in general. Here we propose a formal definition of an autonomous agent which clearly distinguishes a software agent from just any program. We also offer the beginnings of a natural kinds taxonomy of autonomous agents, and discuss possibilities for further classification. Finally, we discuss subagents and multiagent systems.",
"title": ""
},
{
"docid": "b499ded5996db169e65282dd8b65f289",
"text": "For complex tasks, such as manipulation and robot navigation, reinforcement learning (RL) is well-known to be difficult due to the curse of dimensionality. To overcome this complexity and making RL feasible, hierarchical RL (HRL) has been suggested. The basic idea of HRL is to divide the original task into elementary subtasks, which can be learned using RL. In this paper, we propose a HRL architecture for learning robot’s movements, e.g. robot navigation. The proposed HRL consists of two layers: (i) movement planning and (ii) movement execution. In the planning layer, e.g. generating navigation trajectories, discrete RL is employed while using movement primitives. Given the movement planning and corresponding primitives, the policy for the movement execution can be learned in the second layer using continuous RL. The proposed approach is implemented and evaluated on a mobile robot platform for a",
"title": ""
},
{
"docid": "b7cc4a094988643e65d80d4989276d98",
"text": "In this paper, we describe the design and layout of an automotive radar sensor demonstrator for 77 GHz with a SiGe chipset and a fully parallel receiver architecture which is capable of digital beamforming and superresolution direction of arrival estimation methods in azimuth. Additionally, we show measurement results of this radar sensor mounted on a test vehicle.",
"title": ""
},
{
"docid": "a6f534f6d6a27b076cee44a8a188bb72",
"text": "Managing models requires extracting information from them and modifying them, and this is performed through queries. Queries can be executed at the model or at the persistence-level. Both are complementary but while model-level queries are closer to modelling engineers, persistence-level queries are specific to the persistence technology and leverage its capabilities. This paper presents MQT, an approach that translates EOL (model-level queries) to SQL (persistence-level queries) at runtime. Runtime translation provides several benefits: (i) queries are executed only when the information is required; (ii) context and metamodel information is used to get more performant translated queries; and (iii) supports translating query programs using variables and dependant queries. Translation process used by MQT is described through two examples and we also evaluate performance of the approach.",
"title": ""
},
{
"docid": "8abcf3e56e272c06da26a40d66afcfb0",
"text": "As internet use becomes increasingly integral to modern life, the hazards of excessive use are also becoming apparent. Prior research suggests that socially anxious individuals are particularly susceptible to problematic internet use. This vulnerability may relate to the perception of online communication as a safer means of interacting, due to greater control over self-presentation, decreased risk of negative evaluation, and improved relationship quality. To investigate these hypotheses, a general sample of 338 completed an online survey. Social anxiety was confirmed as a significant predictor of problematic internet use when controlling for depression and general anxiety. Social anxiety was associated with perceptions of greater control and decreased risk of negative evaluation when communicating online, however perceived relationship quality did not differ. Negative expectations during face-to-face interactions partially accounted for the relationship between social anxiety and problematic internet use. There was also preliminary evidence that preference for online communication exacerbates face-to-face avoidance.",
"title": ""
},
{
"docid": "22646672196b49cc0fde4b6c6e187fd1",
"text": "There is a tremendous increase in the research of data mining. Data mining is the process of extraction of data from large database. Knowledge Discovery in database (KDD) is another name of data mining. Privacy protection has become a necessary requirement in many data mining applications due to emerging privacy legislation and regulations. One of the most important topics in research community is Privacy Preserving Data Mining (PPDM). Privacy preserving data mining (PPDM) deals with protecting the privacy of individual data or sensitive knowledge without sacrificing the utility of the data. The Success of Privacy Preserving data mining algorithms is measured in terms of its performance, data utility, level of uncertainty or resistance to data mining algorithms etc. In this paper we will review on various privacy preserving techniques like Data perturbation, condensation etc.",
"title": ""
},
{
"docid": "f6fcb8061edd683c91d974444d409896",
"text": "<i>Garbage-First</i> is a server-style garbage collector, targeted for multi-processors with large memories, that meets a soft real-time goal with high probability, while achieving high throughput. Whole-heap operations, such as global marking, are performed concurrently with mutation, to prevent interruptions proportional to heap or live-data size. Concurrent marking both provides collection \"completeness\" and identifies regions ripe for reclamation via compacting evacuation. This evacuation is performed in parallel on multiprocessors, to increase throughput.",
"title": ""
},
{
"docid": "e36e26f084c0f589e5d36bb2103106ff",
"text": "Supervised learning with large scale labelled datasets and deep layered models has caused a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers from generalization issues under the presence of a domain shift between the training and the test data distribution. Since unsupervised domain adaptation algorithms directly address this domain shift problem between a labelled source dataset and an unlabelled target dataset, recent papers [11, 33] have shown promising results by fine-tuning the networks with domain adaptation loss functions which try to align the mismatch between the training and testing data distributions. Nevertheless, these recent deep learning based domain adaptation approaches still suffer from issues such as high sensitivity to the gradient reversal hyperparameters [11] and overfitting during the fine-tuning stage. In this paper, we propose a unified deep learning framework where the representation, cross domain transformation, and target label inference are all jointly optimized in an end-to-end fashion for unsupervised domain adaptation. Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms in both object recognition and digit classification experiments by a large margin.",
"title": ""
},
{
"docid": "5632d79f37b4bc774cd3bdf7f1cd5c71",
"text": "Switching devices based on wide band gap materials as SiC offer a significant performance improvement on the switch level compared to Si devices. A well known example are SiC diodes employed e.g. in PFC converters. In this paper, the impact on the system level performance, i.e. efficiency/power density, of a PFC and of a DC-DC converter resulting with the new SiC devices is evaluated based on analytical optimisation procedures and prototype systems. There, normally-on JFETs by SiCED and normally-off JFETs by SemiSouth are considered.",
"title": ""
},
{
"docid": "63cf9ef326bbe39aa1ecc86b6b1cb0ce",
"text": "Drug delivery systems (DDS) have become important tools for the specific delivery of a large number of drug molecules. Since their discovery in the 1960s liposomes were recognized as models to study biological membranes and as versatile DDS of both hydrophilic and lipophilic molecules. Liposomes--nanosized unilamellar phospholipid bilayer vesicles--undoubtedly represent the most extensively studied and advanced drug delivery vehicles. After a long period of research and development efforts, liposome-formulated drugs have now entered the clinics to treat cancer and systemic or local fungal infections, mainly because they are biologically inert and biocompatible and practically do not cause unwanted toxic or antigenic reactions. A novel, up-coming and promising therapy approach for the treatment of solid tumors is the depletion of macrophages, particularly tumor associated macrophages with bisphosphonate-containing liposomes. In the advent of the use of genetic material as therapeutic molecules the development of delivery systems to target such novel drug molecules to cells or to target organs becomes increasingly important. Liposomes, in particular lipid-DNA complexes termed lipoplexes, compete successfully with viral gene transfection systems in this field of application. Future DDS will mostly be based on protein, peptide and DNA therapeutics and their next generation analogs and derivatives. Due to their versatility and vast body of known properties liposome-based formulations will continue to occupy a leading role among the large selection of emerging DDS.",
"title": ""
},
{
"docid": "38a5b1d2e064228ec498cf64d29d80e5",
"text": "Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy. That is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as real-world challenging tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample-efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks.",
"title": ""
},
{
"docid": "e28ab50c2d03402686cc9a465e1231e7",
"text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.",
"title": ""
},
{
"docid": "16a384727d6a323437a0b6ed3cdcc230",
"text": "The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but in practice requires fixed-size inputs making it impractical for production systems where class sizes can vary. This impedes training and the final utility of few-shot learning systems. This paper describes an approach to constructing and training a network that can handle arbitrary example sizes dynamically as the system is used.",
"title": ""
},
{
"docid": "fceecabcbcbd4786adc755370d8eb635",
"text": "A Halbach array permanent magnet spherical motor (HPMSM) can provide 3 degrees-of-freedom motion in a single joint, simplify the mechanical structure greatly, improve the positioning precision and response speed. However, a HPMSM is a multivariable, nonlinear and strong coupling system with serious inter-axis nonlinear coupling. The dynamic model of a HPMSM is described in this paper, and a control algorithm based on computed torque method is proposed to realize the dynamic decoupling control of the HPMSM. Simulations results indicate that this algorithm can make the system track continues trajectory ideally and eliminate the influences of inter-axis nonlinear coupling effectively to achieve a good control performance.",
"title": ""
},
{
"docid": "78f5714742827af79a93d6596f3decc9",
"text": "In Alzheimer's disease (AD), β-amyloid (Aβ) plaques are tightly enveloped by microglia processes, but the significance of this phenomenon is unknown. Here we show that microglia constitute a barrier with profound impact on plaque composition and toxicity. Using high-resolution confocal and in vivo two-photon imaging in AD mouse models, we demonstrate that this barrier prevents outward plaque expansion and leads to compact plaque microregions with low Aβ42 affinity. Areas uncovered by microglia are less compact but have high Aβ42 affinity, leading to the formation of protofibrillar Aβ42 hotspots that are associated with more severe axonal dystrophy. In ageing, microglia coverage is reduced leading to enlarged protofibrillar Aβ42 hotspots and more severe neuritic dystrophy. CX3CR1 gene deletion or anti-Aβ immunotherapy causes expansion of microglia coverage and reduced neuritic dystrophy. Failure of the microglia barrier and the accumulation of neurotoxic protofibrillar Aβ hotspots may constitute novel therapeutic and clinical imaging targets for AD.",
"title": ""
},
{
"docid": "cb29a1fc5a8b70b755e934c9b3512a36",
"text": "The problem of pedestrian detection in image and video frames has been extensively investigated in the past decade. However, the low performance in complex scenes shows that it remains an open problem. In this paper, we propose to cascade simple Aggregated Channel Features (ACF) and rich Deep Convolutional Neural Network (DCNN) features for efficient and effective pedestrian detection in complex scenes. The ACF based detector is used to generate candidate pedestrian windows and the rich DCNN features are used for fine classification. Experiments show that the proposed approach achieved leading performance in the INRIA dataset and comparable performance to the state-of-the-art in the Caltech and ETH datasets.",
"title": ""
},
{
"docid": "e35194cb3fdd3edee6eac35c45b2da83",
"text": "The availability of high-resolution Digital Surface Models of coastal environments is of increasing interest for scientists involved in the study of the coastal system processes. Among the range of terrestrial and aerial methods available to produce such a dataset, this study tests the utility of the Structure from Motion (SfM) approach to low-altitude aerial imageries collected by Unmanned Aerial Vehicle (UAV). The SfM image-based approach was selected whilst searching for a rapid, inexpensive, and highly automated method, able to produce 3D information from unstructured aerial images. In particular, it was used to generate a dense point cloud and successively a high-resolution Digital Surface Models (DSM) of a beach dune system in Marina di Ravenna (Italy). The quality of the elevation dataset produced by the UAV-SfM was initially evaluated by comparison with point cloud generated by a Terrestrial Laser Scanning (TLS) surveys. Such a comparison served to highlight an average difference in the vertical values of 0.05 m (RMS = 0.19 m). However, although the points cloud comparison is the best approach to investigate the absolute or relative correspondence between UAV and TLS OPEN ACCESS Remote Sens. 2013, 5 6881 methods, the assessment of geomorphic features is usually based on multi-temporal surfaces analysis, where an interpolation process is required. DSMs were therefore generated from UAV and TLS points clouds and vertical absolute accuracies assessed by comparison with a Global Navigation Satellite System (GNSS) survey. The vertical comparison of UAV and TLS DSMs with respect to GNSS measurements pointed out an average distance at cm-level (RMS = 0.011 m). The successive point by point direct comparison between UAV and TLS elevations show a very small average distance, 0.015 m, with RMS = 0.220 m. Larger values are encountered in areas where sudden changes in topography are present. The UAV-based approach was demonstrated to be a straightforward one and accuracy of the vertical dataset was comparable with results obtained by TLS technology.",
"title": ""
}
] |
scidocsrr
|
ec2261c87ecb5057b95386bce090afa9
|
Joint auto-encoders: a flexible multi-task learning framework
|
[
{
"docid": "7456842efeebb480c21974f78aea2a9f",
"text": "Connectionist networks that have learned one task can be reused on related tasks in a process that is called \"transfer\". This paper surveys recent work on transfer. A number of distinctions between kinds of transfer are identified, and future directions for research are explored. The study of transfer has a long history in cognitive science. Discoveries about transfer in human cognition can inform applied efforts. Advances in applications can also inform cognitive studies.",
"title": ""
}
] |
[
{
"docid": "958b0739c5c2d65bbb1cf0b7687610ff",
"text": "BACKGROUND\nDexlansoprazole is a new proton pump inhibitor (PPI) with a dual delayed-release system. Both dexlansoprazole and esomeprazole are an enantiomer of lansoprazole and omeprazole respectively. However, there is no head-to-head trial data or indirect comparison analyses between dexlansoprazole and esomeprazole.\n\n\nAIM\nTo compare the efficacy of dexlansoprazole with esomeprazole in healing erosive oesophagitis (EO), the maintenance of healed EO and the treatment of non-erosive reflux disease (NERD).\n\n\nMETHODS\nRandomised Controlled Trials (RCTs) comparing dexlansoprazole or esomeprazole with either placebo or another PPI were systematically reviewed. Random-effect meta-analyses and adjusted indirect comparisons were conducted to compare the treatment effect of dexlansoprazole and esomeprazole using a common comparator. The relative risk (RR) and 95% confidence interval (CI) were calculated.\n\n\nRESULTS\nThe indirect comparisons revealed significant differences in symptom control of heartburn in patients with NERD at 4 weeks. Dexlansoprazole 30 mg was more effective than esomeprazole 20 mg or 40 mg (RR: 2.01, 95% CI: 1.15-3.51; RR: 2.17, 95% CI: 1.39-3.38). However, there were no statistically significant differences between the two drugs in EO healing and maintenance of healed EO. Comparison of symptom control in healed EO was not able to be made due to different definitions used in the RCTs.\n\n\nCONCLUSIONS\nAdjusted indirect comparisons based on currently available RCT data suggested significantly better treatment effect in symptom control of heartburn in patients with NERD for dexlansoprazole against esomeprazole. No statistically significant differences were found in other EO outcomes. However, these study findings need to be interpreted with caution due to small number of studies and other limitations.",
"title": ""
},
{
"docid": "b766fe26da9106d65a72b564594e28e6",
"text": "The thalamus has long been seen as responsible for relaying information on the way to the cerebral cortex, but it has not been until the last decade or so that the functional nature of this relay has attracted significant attention. Whereas earlier views tended to relegate thalamic function to a simple, machine-like relay process, recent research, reviewed in this article, demonstrates complicated circuitry and a rich array of membrane properties underlying the thalamic relay. It is now clear that the thalamic relay does not have merely a trivial function. Suggestions that the thalamic circuits and cell properties only come into play during certain phases of sleep to effectively disconnect the relay are correct as far as they go, but they are incomplete, because they fail to take into account interesting and variable properties of the relay that, we argue, occur during normal waking behavior. Although the specific function of the circuits and cellular properties of the thalamic relay for waking behavior is far from clear, we offer two related hypotheses based on recent experimental evidence. One is that the thalamus is not used just to relay peripheral information from, for example, visual, auditory, or cerebellar inputs, but that some thalamic nuclei are arranged instead to relay information from one cortical area to another. The second is that the thalamus is not a simple, passive relay of information to cortex but instead is involved in many dynamic processes that significantly alter the nature of the information relayed to cortex.",
"title": ""
},
{
"docid": "1656c30461306705c69f79c7701e89b8",
"text": "Conformal geometry is at the core of pure mathematics. Conformal structure is more flexible than Riemaniann metric but more rigid than topology. Conformal geometric methods have played important roles in engineering fields. This work introduces a theoretically rigorous and practically efficient method for computing Riemannian metrics with prescribed Gaussian curvatures on discrete surfaces—discrete surface Ricci flow, whose continuous counter part has been used in the proof of Poincaré conjecture. Continuous Ricci flow conformally deforms a Riemannian metric on a smooth surface such that the Gaussian curvature evolves like a heat diffusion process. Eventually, the Gaussian curvature becomes constant and the limiting Riemannian metric is conformal to the original one. In the discrete case, surfaces are represented as piecewise linear triangle meshes. Since the Riemannian metric and the Gaussian curvature are discretized as the edge lengths and the angle deficits, the discrete Ricci flow can be defined as the deformation of edge lengths driven by the discrete curvature. The existence and uniqueness of the solution and the convergence of the flow process are theoretically proven, and numerical algorithms to compute Riemannian metrics with prescribed Gaussian curvatures using discrete Ricci flow are also designed. Discrete Ricci flow has broad applications in graphics, geometric modeling, and medical imaging, such as surface parameterization, surface matching, manifold splines, and construction of geometric structures on general surfaces.",
"title": ""
},
{
"docid": "f0a7f1f36c10cdd84f88f5e1c266f78d",
"text": "We connect a broad class of generative models through their shared reliance on sequential decision making. Motivated by this view, we develop extensions to an existing model, and then explore the idea further in the context of data imputation – perhaps the simplest setting in which to investigate the relation between unconditional and conditional generative modelling. We formulate data imputation as an MDP and develop models capable of representing effective policies for it. We construct the models using neural networks and train them using a form of guided policy search [11]. Our models generate predictions through an iterative process of feedback and refinement. We show that this approach can learn effective policies for imputation problems of varying difficulty and across multiple datasets.",
"title": ""
},
{
"docid": "6adfcf6aec7b33a82e3e5e606c93295d",
"text": "Cyber security is a serious global concern. The potential of cyber terrorism has posed a threat to national security; meanwhile the increasing prevalence of malware and incidents of cyber attacks hinder the utilization of the Internet to its greatest benefit and incur significant economic losses to individuals, enterprises, and public organizations. This paper presents some recent advances in intrusion detection, feature selection, and malware detection. In intrusion detection, stealthy and low profile attacks that include only few carefully crafted packets over an extended period of time to delude firewalls and the intrusion detection system (IDS) have been difficult to detect. In protection against malware (trojans, worms, viruses, etc.), how to detect polymorphic and metamorphic versions of recognized malware using static scanners is a great challenge. We present in this paper an agent based IDS architecture that is capable of detecting probe attacks at the originating host and denial of service (DoS) attacks at the boundary controllers. We investigate and compare the performance of different classifiers implemented for intrusion detection purposes. Further, we study the performance of the classifiers in real-time detection of probes and DoS attacks, with respect to intrusion data collected on a real operating network that includes a variety of simulated attacks. Feature selection is as important for IDS as it is for many other modeling problems. We present several techniques for feature selection and compare their performance in the IDS application. It is demonstrated that, with appropriately chosen features, both probes and DoS attacks can be detected in real time or near real time at the originating host or at the boundary controllers. We also briefly present some encouraging recent results in detecting polymorphic and metamorphic malware with advanced static, signature-based scanning techniques.",
"title": ""
},
{
"docid": "9c24c2372ffd9526ee5c80c69685d01f",
"text": "This work explores the use of tow steered composite laminates, functionally graded metals (FGM), thickness distributions, and curvilinear rib/spar/stringer topologies for aeroelastic tailoring. Parameterized models of the Common Research Model (CRM) wing box have been developed for passive aeroelastic tailoring trade studies. Metrics of interest include the wing weight, the onset of dynamic flutter, and the static aeroelastic stresses. Compared to a baseline structure, the lowest aggregate static wing stresses could be obtained with tow steered skins (47% improvement), and many of these designs could reduce weight as well (up to 14%). For these structures, the trade-off between flutter speed and weight is generally strong, although one case showed both a 100% flutter improvement and a 3.5% weight reduction. Material grading showed no benefit in the skins, but moderate flutter speed improvements (with no weight or stress increase) could be obtained by grading the spars (4.8%) or ribs (3.2%), where the best flutter results were obtained by grading both thickness and material. For the topology work, large weight reductions were obtained by removing an inner spar, and performance was maintained by shifting stringers forward and/or using curvilinear ribs: 5.6% weight reduction, a 13.9% improvement in flutter speed, but a 3.0% increase in stress levels. Flutter resistance was also maintained using straightrotated ribs although the design had a 4.2% lower flutter speed than the curved ribs of similar weight and stress levels were higher. These results will guide the development of a future design optimization scheme established to exploit and combine the individual attributes of these technologies.",
"title": ""
},
{
"docid": "99ed46c953a7a00e6d9a5dbd214cae77",
"text": "A number of important problems in theoretical computer science and machine learning can be interpreted as recovering a certain basis. These include certain tensor decompositions, Independent Component Analysis (ICA), spectral clustering and Gaussian mixture learning. Each of these problems reduces to an instance of our general model, which we call a “Basis Encoding Function” (BEF). We show that learning a basis within this model can then be provably and efficiently achieved using a first order iteration algorithm (gradient iteration). Our algorithm goes beyond tensor methods, providing a function-based generalization for a number of existing methods including the classical matrix power method, the tensor power iteration as well as cumulant-based FastICA. Our framework also unifies the unusual phenomenon observed in these domains that they can be solved using efficient non-convex optimization. Specifically, we describe a class of BEFs such that their local maxima on the unit sphere are in one-to-one correspondence with the basis elements. This description relies on a certain “hidden convexity” property of these functions. We provide a complete theoretical analysis of gradient iteration even when the BEF is perturbed. We show convergence and complexity bounds polynomial in dimension and other relevant parameters, such as perturbation size. Our perturbation results can be considered as a non-linear version of the classical Davis-Kahan theorem for perturbations of eigenvectors of symmetric matrices. In addition we show that our algorithm exhibits fast (superlinear) convergence and relate the speed of convergence to the properties of the BEF. Moreover, the gradient iteration algorithm can be easily and efficiently implemented in practice. Finally we apply our framework by providing the first provable algorithm for recovery in a general perturbed ICA model. ar X iv :1 41 1. 14 20 v3 [ cs .L G ] 3 N ov 2 01 5",
"title": ""
},
{
"docid": "d480b887cfaec89a20b329332438e86d",
"text": "Modern cryptocurrencies exploit decentralised blockchains to record a public and unalterable history of transactions. Besides transactions, further information is stored for different, and often undisclosed, purposes, making the blockchains a rich and increasingly growing source of valuable information, in part of difficult interpretation. Many data analytics have been developed, mostly based on specifically designed and ad-hoc engineered approaches. We propose a general-purpose framework, seamlessly supporting data analytics on both Bitcoin and Ethereum --- currently the two most prominent cryptocurrencies. Such a framework allows us to integrate relevant blockchain data with data from other sources, and to organise them in a database, either SQL or NoSQL. Our framework is released as an open-source Scala library. We illustrate the distinguishing features of our approach on a set of significant use cases, which allow us to empirically compare ours to other competing proposals, and evaluate the impact of the database choice on scalability.",
"title": ""
},
{
"docid": "ba50550de9920eb3c40da0550663dd32",
"text": "Bile acids are important signaling molecules that regulate cholesterol, glucose, and energy homoeostasis and have thus been implicated in the development of metabolic disorders. Their bioavailability is strongly modulated by the gut microbiota, which contributes to generation of complex individual-specific bile acid profiles. Hence, it is important to have accurate methods at hand for precise measurement of these important metabolites. Here, a rapid and sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for simultaneous identification and quantitation of primary and secondary bile acids as well as their taurine and glycine conjugates was developed and validated. Applicability of the method was demonstrated for mammalian tissues, biofluids, and cell culture media. The analytical approach mainly consists of a simple and rapid liquid-liquid extraction procedure in presence of deuterium-labeled internal standards. Baseline separation of all isobaric bile acid species was achieved and a linear correlation over a broad concentration range was observed. The method showed acceptable accuracy and precision on intra-day (1.42-11.07 %) and inter-day (2.11-12.71 %) analyses and achieved good recovery rates for representative analytes (83.7-107.1 %). As a proof of concept, the analytical method was applied to mouse tissues and biofluids, but especially to samples from in vitro fermentations with gut bacteria of the family Coriobacteriaceae. The developed method revealed that the species Eggerthella lenta and Collinsella aerofaciens possess bile salt hydrolase activity, and for the first time that the species Enterorhabdus mucosicola is able to deconjugate and dehydrogenate primary bile acids in vitro.",
"title": ""
},
{
"docid": "80b514540933a9cc31136c8cb86ec9b3",
"text": "We tackle the problem of detecting occluded regions in a video stream. Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.",
"title": ""
},
{
"docid": "c8cd0c0ebd38b3e287d6e6eed965db6b",
"text": "Goalball, one of the official Paralympic events, is popular with visually impaired people all over the world. The purpose of goalball is to throw the specialized ball, with bells inside it, to the goal line of the opponents as many times as possible while defenders try to block the thrown ball with their bodies. Since goalball players cannot rely on visual information, they need to grasp the game situation using their auditory sense. However, it is hard, especially for beginners, to perceive the direction and distance of the thrown ball. In addition, they generally tend to be afraid of the approaching ball because, without visual information, they could be hit by a high-speed ball. In this paper, our goal is to develop an application called GoalBaural (Goalball + aural) that enables goalball players to improve the recognizability of the direction and distance of a thrown ball without going onto the court and playing goalball. The evaluation result indicated that our application would be efficient in improving the speed and the accuracy of locating the balls.",
"title": ""
},
{
"docid": "b0815caebe9373220195ac3b143abeca",
"text": "This paper presents the motivation, basis and a prototype implementation of an ethical adaptor capable of using a moral affective function, guilt, as a basis for altering a robot's ongoing behavior. While the research is illustrated in the context of the battlefield, the methods described are believed generalizable to other domains such as eldercare and are potentially extensible to a broader class of moral emotions, including compassion and empathy.",
"title": ""
},
{
"docid": "7f3a97cb9d269c85fae209fae382ae8c",
"text": "A compliant 2×2 tactile sensor array was developed and investigated for roughness encoding. State of the art cross shape 3D MEMS sensors were integrated with polymeric packaging providing in total 16 sensitive elements to external mechanical stimuli in an area of about 20 mm(2), similarly to the SA1 innervation density in humans. Experimental analysis of the bio-inspired tactile sensor array was performed by using ridged surfaces, with spatial periods from 2.6 mm to 4.1 mm, which were indented with regulated 1N normal force and stroked at constant sliding velocity from 15 mm/s to 48 mm/s. A repeatable and expected frequency shift of the sensor outputs depending on the applied stimulus and on its scanning velocity was observed between 3.66 Hz and 18.46 Hz with an overall maximum error of 1.7%. The tactile sensor could also perform contact imaging during static stimulus indentation. The experiments demonstrated the suitability of this approach for the design of a roughness encoding tactile sensor for an artificial fingerpad.",
"title": ""
},
{
"docid": "7166818699caa1ea981724c8bb550a1c",
"text": "Recommender systems help users locate possible items of interest more quickly by filtering and ranking them in a personalized way. Some of these systems provide the end user not only with such a personalized item list but also with an explanation which describes why a specific item is recommended and why the system supposes that the user will like it. Besides helping the user understand the output and rationale of the system, the provision of such explanations can also improve the general acceptance, perceived quality, or effectiveness of the system. In recent years, the question of how to automatically generate and present system-side explanations has attracted increased interest in research. Today some basic explanation facilities are already incorporated in e-commerce Web sites such as Amazon.com. In this work, we continue this line of recent research and address the question of how explanations can be communicated to the user in a more effective way. In particular, we present the results of a user study in which users of a recommender system were provided with different types of explanation. We experimented with ten different explanation types and measured their effects in different dimensions. The explanation types used in the study include both known visualizations from the literature as well as two novel interfaces based on tag clouds. Our study reveals that the content-based tag cloud explanations are particularly helpful to increase the user-perceived level of transparency and to increase user satisfaction even though they demand higher cognitive effort from the user. Based on these insights and observations, we derive a set of possible guidelines for designing or selecting suitable explanations for recommender systems.",
"title": ""
},
{
"docid": "def70ea18a746ead1c558c18c9cb1dcc",
"text": "Artificial Intelligence: Foundations of Computational Agents is about the science of artificial intelligence (AI). It presents AI as the study of the design of intelligent computational agents. The book is structured as a textbook, but it is accessible to a wide audience of professionals and researchers. The past decades have witnessed the emergence of AI as a serious science and engineering discipline. This book provides the first accessible synthesis of the field aimed at undergraduate and graduate students. It provides a coherent vision of the foundations of the field as it is today, in terms of a multidimensional design space that has been partially explored. As with any science worth its salt, AI has a coherent, formal theory and a rambunctious experimental wing. The book balances theory and experiment, showing how to link them intimately together. It develops the science of AI together with its engineering applications.",
"title": ""
},
{
"docid": "c17e30a9d85c6ac776bdfc80e9283e30",
"text": "Much of estimation of human internal state (goal, intentions, activities, preferences, etc.) is passive: an algorithm observes human actions and updates its estimate of human state. In this work, we embrace the fact that robot actions affect what humans do, and leverage it to improve state estimation. We enable robots to do active information gathering, by planning actions that probe the user in order to clarify their internal state. For instance, an autonomous car will plan to nudge into a human driver's lane to test their driving style. Results in simulation and in a user study suggest that active information gathering significantly outperforms passive state estimation.",
"title": ""
},
{
"docid": "5d34943b8456bbab86adae07392dcca2",
"text": "BACKGROUND\nA key component of many asthma management guidelines is the recommendation for patient education and regular medical review. A number of controlled trials have been conducted to measure the effectiveness of asthma education programmes. These programmes improve patient knowledge, but their impact on health outcomes is less well established. This review was conducted to examine the strength of evidence supporting Step 6 of the Australian Asthma Management Plan: \"Educate and Review Regularly\"; to test whether health outcomes are influenced by education and self-management programmes.\n\n\nOBJECTIVES\nThe objective of this review was to assess the effects of asthma self-management programmes, when coupled with regular health practitioner review, on health outcomes in adults with asthma.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Airways Group trials register and reference lists of articles.\n\n\nSELECTION CRITERIA\nRandomised trials of self-management education in adults over 16 years of age with asthma.\n\n\nDATA COLLECTION AND ANALYSIS\nTrial quality was assessed and data were extracted independently by two reviewers. Study authors were contacted for confirmation.\n\n\nMAIN RESULTS\nTwenty-five trials were included. Self-management education was compared with usual care in 22 studies. Self-management education reduced hospitalisations (odds ratio 0.57, 95% confidence interval 0.38 to 0.88); emergency room visits (odds ratio 0.71, 95% confidence interval (0.57 to 0.90); unscheduled visits to the doctor (odds ratio 0.57, 95% confidence interval 0.40 to 0.82); days off work or school (odds ratio 0.55, 95% confidence interval 0.38 to 0. 79); and nocturnal asthma (odds ratio 0.53, 95% confidence interval 0.39 to 0.72). Measures of lung function were little changed. Self-management programmes that involved a written action plan showed a greater reduction in hospitalisation than those that did not (odds ratio 0.35, 95% confidence interval 0.18 to 0.68). People who managed their asthma by self-adjustment of their medications using an individualised written plan had better lung function than those whose medications were adjusted by a doctor.\n\n\nREVIEWER'S CONCLUSIONS\nTraining in asthma self-management which involves self-monitoring by either peak expiratory flow or symptoms, coupled with regular medical review and a written action plan appears to improve health outcomes for adults with asthma. Training programmes which enable people to adjust their medication using a written action plan appear to be more effective than other forms of asthma self-management.",
"title": ""
},
{
"docid": "cea9c1bab28363fc6f225b7843b8df99",
"text": "Published in Agron. J. 104:1336–1347 (2012) Posted online 29 June 2012 doi:10.2134/agronj2012.0065 Copyright © 2012 by the American Society of Agronomy, 5585 Guilford Road, Madison, WI 53711. All rights reserved. No part of this periodical may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. T leaf area index (LAI), the ratio of leaf area to ground area, typically reported as square meters per square meter, is a commonly used biophysical characteristic of vegetation (Watson, 1947). The LAI can be subdivided into photosynthetically active and photosynthetically inactive components. The former, the gLAI, is a metric commonly used in climate (e.g., Buermann et al., 2001), ecological (e.g., Bulcock and Jewitt, 2010), and crop yield (e.g., Fang et al., 2011) models. Because of its wide use and applicability to modeling, there is a need for a nondestructive remote estimation of gLAI across large geographic areas. Various techniques based on remotely sensed data have been utilized for assessing gLAI (see reviews by Pinter et al., 2003; Hatfield et al., 2004, 2008; Doraiswamy et al., 2003; le Maire et al., 2008, and references therein). Vegetation indices, particularly the NDVI (Rouse et al., 1974) and SR (Jordan, 1969), are the most widely used. The NDVI, however, is prone to saturation at moderate to high gLAI values (Kanemasu, 1974; Curran and Steven, 1983; Asrar et al., 1984; Huete et al., 2002; Gitelson, 2004; Wu et al., 2007; González-Sanpedro et al., 2008) and requires reparameterization for different crops and species. The saturation of NDVI has been attributed to insensitivity of reflectance in the red region at moderate to high gLAI values due to the high absorption coefficient of chlorophyll. For gLAI below 3 m2/m2, total absorption by a canopy in the red range reaches 90 to 95%, and further increases in gLAI do not bring additional changes in absorption and reflectance (Hatfield et al., 2008; Gitelson, 2011). Another reason for the decrease in the sensitivity of NDVI to moderate to high gLAI values is the mathematical formulation of that index. At moderate to high gLAI, the NDVI is dominated by nearinfrared (NIR) reflectance. Because scattering by the cellular or leaf structure causes the NIR reflectance to be high and the absorption by chlorophyll causes the red reflectance to be low, NIR reflectance is considerably greater than red reflectance: e.g., for gLAI >3 m2/m2, NIR reflectance is >40% while red reflectance is <5%. Thus, NDVI becomes insensitive to changes in both red and NIR reflectance. Other commonly used VIs include the Enhanced Vegetation Index, EVI (Liu and Huete, 1995; Huete et al., 1997, 2002), its ABStrAct",
"title": ""
},
{
"docid": "f77a235f49cc8b0c037eb0c528b2c9dc",
"text": "This paper describes the museum wearable: a wearable computer which orchestrates an audiovisual narration as a function of the visitor’s interests gathered from his/her physical path in the museum and length of stops. The wearable is made by a lightweight and small computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small, lightweight eye-piece display (often called private-eye) attached to conventional headphones. Using custom built infrared location sensors distributed in the museum space, and statistical mathematical modeling, the museum wearable builds a progressively refined user model and uses it to deliver a personalized audiovisual narration to the visitor. This device will enrich and personalize the museum visit as a visual and auditory storyteller that is able to adapt its story to the audience’s interests and guide the public through the path of the exhibit.",
"title": ""
},
{
"docid": "c36fa18fc7c0a374c2003cfcf95c632c",
"text": "Extreme multi-label classification (XMLC) is a problem of tagging an instance with a small subset of relevant labels chosen from an extremely large pool of possible labels. Large label spaces can be efficiently handled by organizing labels as a tree, like in the hierarchical softmax (HSM) approach commonly used for multi-class problems. In this paper, we investigate probabilistic label trees (PLTs) that have been recently devised for tackling XMLC problems. We show that PLTs are a no-regret multi-label generalization of HSM when precision@k is used as a model evaluation metric. Critically, we prove that pick-one-label heuristic—a reduction technique from multi-label to multi-class that is routinely used along with HSM—is not consistent in general. We also show that our implementation of PLTs, referred to as EXTREMETEXT (XT), obtains significantly better results than HSM with the pick-one-label heuristic and XML-CNN, a deep network specifically designed for XMLC problems. Moreover, XT is competitive to many state-of-the-art approaches in terms of statistical performance, model size and prediction time which makes it amenable to deploy in an online system.",
"title": ""
}
] |
scidocsrr
|
3c0dd8a974108c66a96b721e34450223
|
Cartesian impedance control of redundant manipulators for human-robot co-manipulation
|
[
{
"docid": "56316a77e260d8122c4812d684f4d223",
"text": "Manipulation fundamentally requires a manipulator to be mechanically coupled to the object being manipulated. A consideration of the physical constraints imposed by dynamic interaction shows that control of a vector quantity such as position or force is inadequate and that control of the manipulator impedance is also necessary. Techniques for control of manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.",
"title": ""
},
{
"docid": "aab75b349485fe8a626b9d6dad286b0f",
"text": "Impedance and Admittance Control are two distinct implementations of the same control goal. It is well known that their stability and performance properties are complementary. In this paper, we present a hybrid system approach, which incorporates Impedance and Admittance Control as two extreme cases of one family of controllers. This approach allows to continuously switch and interpolate between Impedance and Admittance Control. We compare the basic stability and performance properties of the resulting controllers by means of an extensive case study of a one-dimensional system and present an experimental evaluation using the KUKA-DLR-lightweight arm.",
"title": ""
}
] |
[
{
"docid": "93c9751cda2db3aa44e732abdf4bc82e",
"text": "The current study was motivated by a need for a self-report questionnaire that assesses a broad range of subthreshold autism traits, is brief and easily administered, and is relevant to the general population. An initial item pool was administered to 1,709 students. Structural validity analysis resulted in a 24-item questionnaire termed the Subthreshold Autism Trait Questionnaire (SATQ; Cronbach's alpha coefficient = .73, test-retest reliability = .79). An exploratory factor analysis suggested 5 factors. Confirmatory factor analysis indicated the 5 factor solution was an adequate fit and outperformed two other models. The SATQ successfully differentiated between an ASD and student group and demonstrated convergent validity with other ASD measures. Thus, the current study introduces and provides initial psychometric support for the SATQ.",
"title": ""
},
{
"docid": "7deac3cbb3a30914412db45f69fb27f1",
"text": "This paper presents the design, numerical analysis and measurements of a planar bypass balun that provides 1:4 impedance transformations between the unbalanced microstrip (MS) and balanced coplanar strip line (CPS). This type of balun is suitable for operation with small antennas fed with balanced a (parallel wire) transmission line, i.e. wire, planar dipoles and loop antennas. The balun has been applied to textile CPS-fed loop antennas, designed for operations below 1GHz. The performance of a loop antenna with the balun is described, as well as an idea of incorporating rigid circuits with flexible textile structures.",
"title": ""
},
{
"docid": "d049a1779a8660f689f1da5daada69dc",
"text": "Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.",
"title": ""
},
{
"docid": "4313c87376e6ea9fac7dc32f359c2ae9",
"text": "Game engines are specialized middleware which facilitate rapid game development. Until now they have been highly optimized to extract maximum performance from single processor hardware. In the last couple of years improvements in single processor hardware have approached physical limits and performance gains have slowed to become incremental. As a consequence, improvements in game engine performance have also become incremental. Currently, hardware manufacturers are shifting to dual and multi-core processor architectures, and the latest game consoles also feature multiple processors. This presents a challenge to game engine developers because of the unfamiliarity and complexity of concurrent programming. The next generation of game engines must address the issues of concurrency if they are to take advantage of the new hardware. This paper discusses the issues, approaches, and tradeoffs that need to be considered in the design of a multi-threaded game engine.",
"title": ""
},
{
"docid": "309dee96492cf45ed2887701b27ad3ee",
"text": "The objective of a systematic review is to obtain empirical evidence about the topic under review and to allow moving forward the body of knowledge of a discipline. Therefore, systematic reviewing is a tool we can apply in Software Engineering to develop well founded guidelines with the final goal of improving the quality of the software systems. However, we still do not have as much experience in performing systematic reviews as in other disciplines like medicine, and therefore we need detailed guidance. This paper presents a proposal of a improved process to perform systematic reviews in software engineering. This process is the result of the tasks carried out in a first review and a subsequent update concerning the effectiveness of elicitation techniques.",
"title": ""
},
{
"docid": "1d8a8f6f95a729a44486f89ffb07b63a",
"text": "MicroRNAs are short, noncoding RNA transcripts that post-transcriptionally regulate gene expression. Several hundred microRNA genes have been identified in Caenorhabditis elegans, Drosophila, plants and mammals. MicroRNAs have been linked to developmental processes in C. elegans, plants and humans and to cell growth and apoptosis in Drosophila. A major impediment in the study of microRNA function is the lack of quantitative expression profiling methods. To close this technological gap, we have designed dual-channel microarrays that monitor expression levels of 124 mammalian microRNAs. Using these tools, we observed distinct patterns of expression among adult mouse tissues and embryonic stem cells. Expression profiles of staged embryos demonstrate temporal regulation of a large class of microRNAs, including members of the let-7 family. This microarray technology enables comprehensive investigation of microRNA expression, and furthers our understanding of this class of recently discovered noncoding RNAs.",
"title": ""
},
{
"docid": "1904d8b3c45bc24acdc0294d84d66c79",
"text": "The propagation of unreliable information is on the rise in many places around the world. This expansion is facilitated by the rapid spread of information and anonymity granted by the Internet. The spread of unreliable information is a well-studied issue and it is associated with negative social impacts. In a previous work, we have identified significant differences in the structure of news articles from reliable and unreliable sources in the US media. Our goal in this work was to explore such differences in the Brazilian media. We found significant features in two data sets: one with Brazilian news in Portuguese and another one with US news in English. Our results show that features related to the writing style were prominent in both data sets and, despite the language difference, some features have a universal behavior, being significant to both US and Brazilian news articles. Finally, we combined both data sets and used the universal features to build a machine learning classifier to predict the source type of a news article as reliable or unreliable.",
"title": ""
},
{
"docid": "ff8f909eb2212a032781c795ee483954",
"text": "We investigate the market for news under two assumptions: that readers hold beliefs which they like to see confirmed, and that newspapers can slant stories toward these beliefs. We show that, on the topics where readers share common beliefs, one should not expect accuracy even from competitive media: competition results in lower prices, but common slanting toward reader biases. On topics where reader beliefs diverge (such as politically divisive issues), however, newspapers segment the market and slant toward extreme positions. Yet in the aggregate, a reader with access to all news sources could get an unbiased perspective. Generally speaking, reader heterogeneity is more important for accuracy in media than competition per se. (JEL D23, L82)",
"title": ""
},
{
"docid": "0f208f41314384a1c34d32224e790664",
"text": "BACKGROUND\nThe Rey 15-Item Memory Test (RMT) is frequently used to detect malingering. Many objections to the test have been raised. Nevertheless, the test is still widely used.\n\n\nOBJECTIVE\nTo provide a meta-analysis of the available studies using the RMT and provide an overall assessment of the sensitivity and specificity of the test, based on the cumulative data.\n\n\nRESULTS\nThe results show that, excluding patients with mental retardation, the RMT has a low sensitivity but an excellent specificity.\n\n\nCONCLUSIONS\nThese results provide the basis for the ongoing use of the test, given that it is acceptable to miss some cases of malingering with such a screening test, but one does not want to have many false positives.",
"title": ""
},
{
"docid": "2784de025936e2c9a9a0e86753281f8b",
"text": "Cardiovascular disease remains the leading cause of disease burden globally, which underlies the continuing need to identify new complementary targets for prevention. Over the past 5–10 years, the pooling of multiple data sets into 'mega-studies' has accelerated progress in research on stress as a risk and prognostic factor for cardiovascular disease. Severe stressful experiences in childhood, such as physical abuse and household substance abuse, can damage health and increase the risk of multiple chronic conditions in adulthood. Compared with childhood stress and adulthood classic risk factors, such as smoking, high blood pressure, and high serum cholesterol levels, the harmful effects of stress in adulthood are generally less marked. However, adulthood stress has an important role as a disease trigger in individuals who already have a high atherosclerotic plaque burden, and as a determinant of prognosis and outcome in those with pre-existing cardiovascular or cerebrovascular disease. In real-life settings, mechanistic studies have corroborated earlier laboratory-based observations on stress-related pathophysiological changes that underlie triggering, such as lowered arrhythmic threshold and increased sympathetic activation with related increases in blood pressure, as well as pro-inflammatory and procoagulant responses. In some clinical guidelines, stress is already acknowledged as a target for prevention for people at high overall risk of cardiovascular disease or with established cardiovascular disease. However, few scalable, evidence-based interventions are currently available.",
"title": ""
},
{
"docid": "dd057cd10948a7c894523c5f0b452965",
"text": "This paper presents an approach to learn meaningful spatial relationships in an unsupervised fashion from the distribution of 3D object poses in the real world. Our approach begins by extracting an over-complete set of features to describe the relative geometry of two objects. Each relationship type is modeled using a relevance-weighted distance over this feature space. This effectively ignores irrelevant feature dimensions. Our algorithm RANSEM for determining subsets of data that share a relationship as well as the model to describe each relationship is based on robust sample-based clustering. This approach combines the search for consistent groups of data with the extraction of models that precisely capture the geometry of those groups. An iterative refinement scheme has shown to be an effective approach for finding concepts of differing degrees of geometric specificity. Our results show that the models learned by our approach correlate strongly with the English labels that have been given by a human annotator to a set of validation data drawn from the NYUv2 real-world Kinect dataset, demonstrating that these concepts can be automatically acquired given sufficient experience. Additionally, the results of our method significantly out-perform K-means, a standard baseline for unsupervised cluster extraction.",
"title": ""
},
{
"docid": "ec2acfbe9020b9a136a14c2be7d517dd",
"text": "Cricket is a popular sport played by 16 countries, is the second most watched sport in the world after soccer, and enjoys a multi-million dollar industry. There is tremendous interest in simulating cricket and more importantly in predicting the outcome of games, particularly in their one-day international format. The complex rules governing the game, along with the numerous natural parameters affecting the outcome of a cricket match present significant challenges for accurate prediction. Multiple diverse parameters, including but not limited to cricketing skills and performances, match venues and even weather conditions can significantly affect the outcome of a game. The sheer number of parameters, along with their interdependence and variance create a non-trivial challenge to create an accurate quantitative model of a game Unlike other sports such as basketball and baseball which are well researched from a sports analytics perspective, for cricket, these tasks have yet to be investigated in depth. In this paper, we build a prediction system that takes in historical match data as well as the instantaneous state of a match, and predicts future match events culminating in a victory or loss. We model the game using a subset of match parameters, using a combination of linear regression and nearestneighbor clustering algorithms. We describe our model and algorithms and finally present quantitative results, demonstrating the performance of our algorithms in predicting the number of runs scored, one of the most important determinants of match outcome.",
"title": ""
},
{
"docid": "542117c3e27d15163b809a528952fb79",
"text": "Predicting the gap between taxi demand and supply in taxi booking apps is completely new and important but challenging. However, manually mining gap rule for different conditions may become impractical because of massive and sparse taxi data. Existing works unilaterally consider demand or supply, used only few simple features and verified by little data, but not predict the gap value. Meanwhile, none of them dealing with missing values. In this paper, we introduce a Double Ensemble Gradient Boosting Decision Tree Model(DEGBDT) to predict taxi gap. (1) Our approach specifically considers demand and supply to predict the gap between them. (2) Also, our method provides a greedy feature ranking and selecting method to exploit most reliable feature. (3) To deal with missing value, our model takes the lead in proposing a double ensemble method, which secondarily integrates different Gradient Boosting Decision Tree(GBDT) model at the different data sparse situation. Experiments on real large-scale dataset demonstrate that our approach can effectively predict the taxi gap than state-of-the-art methods, and shows that double ensemble method is efficacious for sparse data.",
"title": ""
},
{
"docid": "c9ae0fca2ddd718b905283741a93a254",
"text": "A unified power control strategy is proposed for the permanent magnet synchronous generator-based wind energy conversion system (WECS) operating under different grid conditions. In the strategy, the generator-side converter is used to control the dc-link voltage and the grid-side converter is responsible for the control of power flow injected into the grid. The generator-side controller has inherent damping capability of the torsional oscillations caused by drive-train characteristics. The grid-side control is utilized to satisfy the active and reactive current (power) requirements defined in the grid codes, and at the same time mitigates the current distortions even with unsymmetrical grid fault. During grid faults, the generator-side converter automatically reduces the generator current to maintain the dc voltage and the resultant generator acceleration is counteracted by pitch regulation. Compared with the conventional strategy, the dc chopper, which is intended to assist the fault ride through of the WECS, can be eliminated if the proposed scheme is employed. Compared with the variable-structured control scheme, the proposed strategy has quicker and more precise power responses, which is beneficial to the grid recovery. The simulation results verify the effectiveness of the proposed strategy.",
"title": ""
},
{
"docid": "bb2b3944f72c0d1a530f971ddf6dc6fb",
"text": "UNLABELLED\nAny suture material, absorbable or nonabsorbable, elicits a kind of inflammatory reaction within the tissue. Nonabsorbable black silk suture and absorbable polyglycolic acid suture were compared clinically and histologically on various parameters.\n\n\nMATERIALS AND METHODS\nThis study consisted of 50 patients requiring minor surgical procedure, who were referred to the Department of Oral and Maxillofacial Surgery. Patients were selected randomly and sutures were placed in the oral cavity 7 days preoperatively. Polyglycolic acid was placed on one side and black silk suture material on the other. Seven days later, prior to surgical procedure the sutures will be assessed. After the surgical procedure the sutures will be placed postoperatively in the same way for 7 days, after which the sutures will be assessed clinically and histologically.\n\n\nRESULTS\nThe results of this study showed that all the sutures were retained in case of polyglycolic acid suture whereas four cases were not retained in case of black silk suture. As far as polyglycolic acid suture is concerned 25 cases were mild, 18 cases moderate and seven cases were severe. Black silk showed 20 mild cases, 21 moderate cases and six severe cases. The histological results showed that 33 cases showed mild, 14 cases moderate and three cases severe in case of polyglycolic acid suture. Whereas in case of black silk suture 41 cases were mild. Seven cases were moderate and two cases were severe. Black silk showed milder response than polyglycolic acid suture histologically.\n\n\nCONCLUSION\nThe polyglycolic acid suture was more superior because in all 50 patients the suture was retained. It had less tissue reaction, better handling characteristics and knotting capacity.",
"title": ""
},
{
"docid": "a3b4e8b4a54921da210b42e43fc2e7ff",
"text": "CONTEXT\nRecent reports show that obesity and diabetes have increased in the United States in the past decade.\n\n\nOBJECTIVE\nTo estimate the prevalence of obesity, diabetes, and use of weight control strategies among US adults in 2000.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThe Behavioral Risk Factor Surveillance System, a random-digit telephone survey conducted in all states in 2000, with 184 450 adults aged 18 years or older.\n\n\nMAIN OUTCOME MEASURES\nBody mass index (BMI), calculated from self-reported weight and height; self-reported diabetes; prevalence of weight loss or maintenance attempts; and weight control strategies used.\n\n\nRESULTS\nIn 2000, the prevalence of obesity (BMI >/=30 kg/m(2)) was 19.8%, the prevalence of diabetes was 7.3%, and the prevalence of both combined was 2.9%. Mississippi had the highest rates of obesity (24.3%) and of diabetes (8.8%); Colorado had the lowest rate of obesity (13.8%); and Alaska had the lowest rate of diabetes (4.4%). Twenty-seven percent of US adults did not engage in any physical activity, and another 28.2% were not regularly active. Only 24.4% of US adults consumed fruits and vegetables 5 or more times daily. Among obese participants who had had a routine checkup during the past year, 42.8% had been advised by a health care professional to lose weight. Among participants trying to lose or maintain weight, 17.5% were following recommendations to eat fewer calories and increase physical activity to more than 150 min/wk.\n\n\nCONCLUSIONS\nThe prevalence of obesity and diabetes continues to increase among US adults. Interventions are needed to improve physical activity and diet in communities nationwide.",
"title": ""
},
{
"docid": "534554ae5913f192d32efd93256488d6",
"text": "Several unclassified web services are available in the internet which is difficult for the user to choose the correct web services. This raises service discovery cost, transforming data time between services and service searching time. Adequate methods, tools, technologies for clustering the web services have been developed. The clustering of web services is done manually. This survey is organized based on clustering of web service discovery methods, tools and technologies constructed on following list of parameters. The parameters are clustering model, graphs and environment, different technologies, advantages and disadvantages, theory and proof of concepts. Based on the user requirements results are different and better than one another. If the web service clustering is done automatically that can create an impact in the service discovery and fulfills the user requirements. This article gives the overview of the significant issues of the different methods and discusses the lack of technologies and automatic tools of the web service discovery.",
"title": ""
},
{
"docid": "ed1a3ca3e558eeb33e2841fa4b9c28d2",
"text": "© 2010 ETRI Journal, Volume 32, Number 4, August 2010 In this paper, we present a low-voltage low-dropout voltage regulator (LDO) for a system-on-chip (SoC) application which, exploiting the multiplication of the Miller effect through the use of a current amplifier, is frequency compensated up to 1-nF capacitive load. The topology and the strategy adopted to design the LDO and the related compensation frequency network are described in detail. The LDO works with a supply voltage as low as 1.2 V and provides a maximum load current of 50 mA with a drop-out voltage of 200 mV: the total integrated compensation capacitance is about 40 pF. Measurement results as well as comparison with other SoC LDOs demonstrate the advantage of the proposed topology.",
"title": ""
},
{
"docid": "b908987c5bae597683f177beb2bba896",
"text": "This paper presents a novel task of cross-language authorship attribution (CLAA), an extension of authorship attribution task to multilingual settings: given data labelled with authors in language X , the objective is to determine the author of a document written in language Y , where X 6= Y . We propose a number of cross-language stylometric features for the task of CLAA, such as those based on sentiment and emotional markers. We also explore an approach based on machine translation (MT) with both lexical and cross-language features. We experimentally show that MT could be used as a starting point to CLAA, since it allows good attribution accuracy to be achieved. The cross-language features provide acceptable accuracy while using jointly with MT, though do not outperform lexical",
"title": ""
}
] |
scidocsrr
|
0b4c67f00e1c7b55abfc05e06205b37a
|
Universal Transformers
|
[
{
"docid": "2a6aa350dd7ddc663aaaafe4d745845e",
"text": "Neural networks augmented with external memory have the ability to learn algorithmic solutions to complex tasks. These models appear promising for applications such as language modeling and machine translation. However, they scale poorly in both space and time as the amount of memory grows — limiting their applicability to real-world domains. Here, we present an end-to-end differentiable memory access scheme, which we call Sparse Access Memory (SAM), that retains the representational power of the original approaches whilst training efficiently with very large memories. We show that SAM achieves asymptotic lower bounds in space and time complexity, and find that an implementation runs 1,000⇥ faster and with 3,000⇥ less physical memory than non-sparse models. SAM learns with comparable data efficiency to existing models on a range of synthetic tasks and one-shot Omniglot character recognition, and can scale to tasks requiring 100,000s of time steps and memories. As well, we show how our approach can be adapted for models that maintain temporal associations between memories, as with the recently introduced Differentiable Neural Computer.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "87a11f6097cb853b7c98e17cdf97801e",
"text": "Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures—recurrent versus non-recurrent—with respect to their ability to model hierarchical structure and find that recurrency is indeed important for this purpose. The code and data used in our experiments is available at https://github.com/",
"title": ""
},
{
"docid": "98be2f8b10c618f9d2fc8183f289c739",
"text": "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
"title": ""
}
] |
[
{
"docid": "9242d2e212cc20a6e921228bf090c130",
"text": "This paper includes two contributions. First, it proves that the series and shunt radiation components, corresponding to longitudinal and transversal electric fields, respectively, are always in phase quadrature in axially asymmetric periodic leaky-wave antennas (LWAs), so that these antennas are inherently elliptically polarized. This fact is theoretically proven and experimentally illustrated by two case-study examples, a composite right/left-handed (CRLH) LWA and a series-fed patch (SFP) LWA. Second, it shows (for the case of the SFP LWA) that the axial ratio is controlled and minimized by the degree of axial asymmetry.",
"title": ""
},
{
"docid": "afaed9813ab63d0f5a23648a1e0efadb",
"text": "We proposed novel airway segmentation methods in volumetric chest computed tomography (CT) using 2.5D convolutional neural net (CNN) and 3D CNN. A method with 2.5D CNN segments airways by voxel-by-voxel classification based on patches which are from three adjacent slices in each of the orthogonal directions including axial, sagittal, and coronal slices around each voxel, while 3D CNN segments by 3D patch-based semantic segmentation using modified 3D U-Net. The extra-validation of our proposed method was demonstrated in 20 test datasets of the EXACT’09 challenge. The detected tree length and the false positive rate was 60.1%, 4.56% for 2.5D CNN and 61.6%, 3.15% for 3D CNN. Our fully automated (end-to-end) segmentation method could be applied in radiological practice.",
"title": ""
},
{
"docid": "a066ff1b4dfa65a67b79200366021542",
"text": "OBJECTIVES\nWe sought to assess the shave biopsy technique, which is a new surgical procedure for complete removal of longitudinal melanonychia. We evaluated the quality of the specimen submitted for pathological examination, assessed the postoperative outcome, and ascertained its indication between the other types of matrix biopsies.\n\n\nDESIGN\nThis was a retrospective study performed at the dermatologic departments of the Universities of Liège and Brussels, Belgium, of 30 patients with longitudinal or total melanonychia.\n\n\nRESULTS\nPathological diagnosis was made in all cases; 23 patients were followed up during a period of 6 to 40 months. Seventeen patients had no postoperative nail plate dystrophy (74%) but 16 patients had recurrence of pigmentation (70%).\n\n\nLIMITATIONS\nThis was a retrospective study.\n\n\nCONCLUSIONS\nShave biopsy is an effective technique for dealing with nail matrix lesions that cause longitudinal melanonychia over 4 mm wide. Recurrence of pigmentation is the main drawback of the procedure.",
"title": ""
},
{
"docid": "4a518f4cdb34f7cff1d75975b207afe4",
"text": "In this paper, the design and measurement results of a highly efficient 1-Watt broadband class J SiGe power amplifier (PA) at 700 MHz are reported. Comparisons between a class J PA and a traditional class AB/B PA have been made, first through theoretical analysis in terms of load network, efficiency and bandwidth behavior, and secondly by bench measurement data. A single-ended power cell is designed and fabricated in the 0.35 μm IBM 5PAe SiGe BiCMOS technology with through-wafer-vias (TWVs). Watt-level output power with greater than 50% efficiency is achieved on bench across a wide bandwidth of 500 MHz to 900 MHz for the class J PA (i.e., >;57% bandwidth at the center frequency of 700 MHz). Psat of 30.9 dBm with 62% collector efficiency (CE) at 700 MHz is measured while the highest efficiency of 68.9% occurs at 650 MHz using a 4.2 V supply. Load network of this class J PA is realized with lumped passive components on a FR4 printed circuit board (PCB). A narrow-band class AB PA counterpart is also designed and fabricated for comparison. The data suggests that the broadband class J SiGe PA can be promising for future multi-band wireless applications.",
"title": ""
},
{
"docid": "892661d87138d49aab2a54b7557a7021",
"text": "Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the CaltechUCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.",
"title": ""
},
{
"docid": "a45b4d0237fdcfedf973ec639b1a1a36",
"text": "We investigated the brain systems engaged during propositional speech (PrSp) and two forms of non- propositional speech (NPrSp): counting and reciting overlearned nursery rhymes. Bilateral cerebral and cerebellar regions were involved in the motor act of articulation, irrespective of the type of speech. Three additional, left-lateralized regions, adjacent to the Sylvian sulcus, were activated in common: the most posterior part of the supratemporal plane, the lateral part of the pars opercularis in the posterior inferior frontal gyrus and the anterior insula. Therefore, both NPrSp and PrSp were dependent on the same discrete subregions of the anatomically ill-defined areas of Wernicke and Broca. PrSp was also dependent on a predominantly left-lateralized neural system distributed between multi-modal and amodal regions in posterior inferior parietal, anterolateral and medial temporal and medial prefrontal cortex. The lateral prefrontal and paracingulate cortical activity observed in previous studies of cued word retrieval was not seen with either NPrSp or PrSp, demonstrating that normal brain- language representations cannot be inferred from explicit metalinguistic tasks. The evidence from this study indicates that normal communicative speech is dependent on a number of left hemisphere regions remote from the classic language areas of Wernicke and Broca. Destruction or disconnection of discrete left extrasylvian and perisylvian cortical regions, rather than the total extent of damage to perisylvian cortex, will account for the qualitative and quantitative differences in the impaired speech production observed in aphasic stroke patients.",
"title": ""
},
{
"docid": "f3b4a9b49a34d56c32589cee14e6b900",
"text": "The paper reports on mobile robot motion estimation based on matching points from successive two-dimensional (2D) laser scans. This ego-motion approach is well suited to unstructured and dynamic environments because it directly uses raw laser points rather than extracted features. We have analyzed the application of two methods that are very different in essence: (i) A 2D version of iterative closest point (ICP), which is widely used for surface registration; (ii) a genetic algorithm (GA), which is a novel approach for this kind of problem. Their performance in terms of real-time applicability and accuracy has been compared in outdoor experiments with nonstop motion under diverse realistic navigation conditions. Based on this analysis, we propose a hybrid GA-ICP algorithm that combines the best characteristics of these pure methods. The experiments have been carried out with the tracked mobile robot Auriga-alpha and an on-board 2D laser scanner. _____________________________________________________________________________________ This document is a PREPRINT. The published version of the article is available in: Journal of Field Robotics, 23: 21–34. doi: 10.1002/rob.20104; http://dx.doi.org/10.1002/rob.20104.",
"title": ""
},
{
"docid": "b0087e2afdf5a1abc5046782279529a5",
"text": "The rapid development of Community Question Answering (CQA) satisfies users’ quest for professional and personal knowledge about anything. In CQA, one central issue is to find users with expertise and willingness to answer the given questions. Expert finding in CQA often exhibits very different challenges compared to traditional methods. The new features of CQA (such as huge volume, sparse data and crowdsourcing) violate fundamental assumptions of traditional recommendation systems. This paper focuses on reviewing and categorizing the current progress on expert finding in CQA. We classify the recent solutions into four different categories: matrix factorization based models (MF-based models), gradient boosting tree based models (GBT-based models), deep learning based models (DL-based models) and ranking based models (R-based models). We find that MF-based models outperform other categories of models in the crowdsourcing situation. Moreover, we use innovative diagrams to clarify several important concepts of ensemble learning, and find that ensemble models with several specific single models can further boost the performance. Further, we compare the performance of different models on different types of matching tasks, including text vs. text, graph vs. text, audio vs. text and video vs. text. The results will help the model selection of expert finding in practice. Finally, we explore some potential future issues in expert finding research in CQA.",
"title": ""
},
{
"docid": "3133829dd980cc1b428d80890cded347",
"text": "Finger vein images are rich in orientation and edge features. Inspired by the edge histogram descriptor proposed in MPEG-7, this paper presents an efficient orientation-based local descriptor, named histogram of salient edge orientation map (HSEOM). HSEOM is based on the fact that human vision is sensitive to edge features for image perception. For a given image, HSEOM first finds oriented edge maps according to predefined orientations using a well-known edge operator and obtains a salient edge orientation map by choosing an orientation with the maximum edge magnitude for each pixel. Then, subhistograms of the salient edge orientation map are generated from the nonoverlapping submaps and concatenated to build the final HSEOM. In the experiment of this paper, eight oriented edge maps were used to generate a salient edge orientation map for HSEOM construction. Experimental results on our available finger vein image database, MMCBNU_6000, show that the performance of HSEOM outperforms that of state-of-the-art orientation-based methods (e.g., Gabor filter, histogram of oriented gradients, and local directional code). Furthermore, the proposed HSEOM has advantages of low feature dimensionality and fast implementation for a real-time finger vein recognition system.",
"title": ""
},
{
"docid": "5e1f035df9a6f943c5632078831f5040",
"text": "Animacy is a necessary property for a referent to be an agent, and thus animacy detection is useful for a variety of natural language processing tasks, including word sense disambiguation, co-reference resolution, semantic role labeling, and others. Prior work treated animacy as a word-level property, and has developed statistical classifiers to classify words as either animate or inanimate. We discuss why this approach to the problem is ill-posed, and present a new approach based on classifying the animacy of co-reference chains. We show that simple voting approaches to inferring the animacy of a chain from its constituent words perform relatively poorly, and then present a hybrid system merging supervised machine learning (ML) and a small number of handbuilt rules to compute the animacy of referring expressions and co-reference chains. This method achieves state of the art performance. The supervised ML component leverages features such as word embeddings over referring expressions, parts of speech, and grammatical and semantic roles. The rules take into consideration parts of speech and the hypernymy structure encoded in WordNet. The system achieves an F1 of 0.88 for classifying the animacy of referring expressions, which is comparable to state of the art results for classifying the animacy of words, and achieves an F1 of 0.75 for classifying the animacy of coreference chains themselves. We release our training and test dataset, which includes 142 texts (all narratives) comprising 156,154 words, 34,698 referring expressions, and 10,941 co-reference chains. We test the method on a subset of the OntoNotes dataset, showing using manual sampling that animacy classification is 90%±2% accurate for coreference chains, and 92%±1% for referring expressions. The data also contains 46 folktales, which present an interesting challenge because they often involve characters who are members of traditionally inanimate classes (e.g., stoves that walk, trees that talk). We show that our system is able to detect the animacy of these unusual referents with an F1 of 0.95.",
"title": ""
},
{
"docid": "ddaa9d109273684f694c698f5261db9e",
"text": "Multiprocessor architectures and platforms have been introduced to extend the applicability of Moore’s law. They depend on concurrency and synchronization in both software and hardware to enhance the design productivity and system performance. These platforms will also have to incorporate highly scalable, reusable, predictable, costand energy-efficient architectures. With the rapidly approaching billion transistors era, some of the main problems in deep sub-micron technologies which are characterized by gate lengths in the range of 60-90 nm, will arise from non-scalable wire delays, errors in signal integrity and unsynchronized communications. These problems may be overcome by the use of Network on Chip (NOC) architecture. In this paper, we have summarized over sixty research papers and contributions in NOC area.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "9584d194e05359ef5123c6b3d71e1c75",
"text": "A bloom filter is a randomized data structure for performing approximate membership queries. It is being increasingly used in networking applications ranging from security to routing in peer to peer networks. In order to meet a given false positive rate, the amount of memory required by a bloom filter is a function of the number of elements in the set. We consider the problem of minimizing the memory requirements in cases where the number of elements in the set is not known in advance but the distribution or moment information of the number of elements is known. We show how to exploit such information to minimize the expected amount of memory required for the filter. We also show how this approach can significantly reduce memory requirement when bloom filters are constructed for multiple sets in parallel. We show analytically as well as experiments on synthetic and trace data that our approach leads to one to three orders of magnitude reduction in memory compared to a standard bloom filter.",
"title": ""
},
{
"docid": "e9326cb2e3b79a71d9e99105f0259c5a",
"text": "Although drugs are intended to be selective, at least some bind to several physiological targets, explaining side effects and efficacy. Because many drug–target combinations exist, it would be useful to explore possible interactions computationally. Here we compared 3,665 US Food and Drug Administration (FDA)-approved and investigational drugs against hundreds of targets, defining each target by its ligands. Chemical similarities between drugs and ligand sets predicted thousands of unanticipated associations. Thirty were tested experimentally, including the antagonism of the β1 receptor by the transporter inhibitor Prozac, the inhibition of the 5-hydroxytryptamine (5-HT) transporter by the ion channel drug Vadilex, and antagonism of the histamine H4 receptor by the enzyme inhibitor Rescriptor. Overall, 23 new drug–target associations were confirmed, five of which were potent (<100 nM). The physiological relevance of one, the drug N,N-dimethyltryptamine (DMT) on serotonergic receptors, was confirmed in a knockout mouse. The chemical similarity approach is systematic and comprehensive, and may suggest side-effects and new indications for many drugs.",
"title": ""
},
{
"docid": "93ea7c59bad8181b0379f39e00f4d2e8",
"text": "Breadth-First Search (BFS) is a key graph algorithm with many important applications. In this work, we focus on a special class of graph traversal algorithm - concurrent BFS - where multiple breadth-first traversals are performed simultaneously on the same graph. We have designed and developed a new approach called iBFS that is able to run i concurrent BFSes from i distinct source vertices, very efficiently on Graphics Processing Units (GPUs). iBFS consists of three novel designs. First, iBFS develops a single GPU kernel for joint traversal of concurrent BFS to take advantage of shared frontiers across different instances. Second, outdegree-based GroupBy rules enables iBFS to selectively run a group of BFS instances which further maximizes the frontier sharing within such a group. Third, iBFS brings additional performance benefit by utilizing highly optimized bitwise operations on GPUs, which allows a single GPU thread to inspect a vertex for concurrent BFS instances. The evaluation on a wide spectrum of graph benchmarks shows that iBFS on one GPU runs up to 30x faster than executing BFS instances sequentially, and on 112 GPUs achieves near linear speedup with the maximum performance of 57,267 billion traversed edges per second (TEPS).",
"title": ""
},
{
"docid": "7f848facaa535d53e7a6fe7aa2435473",
"text": "The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis and graphics. ©1984 RCA Corporation Final manuscript received November 12, 1984 Reprint Re-29-6-5 that can perform most of the routine visual tasks that humans do effortlessly. It is becoming increasingly clear that the format used to represent image data can be as critical in image processing as the algorithms applied to the data. A digital image is initially encoded as an array of pixel intensities, but this raw format is not suited to most asks. Alternatively, an image may be represented by its Fourier transform, with operations applied to the transform coefficients rather than to the original pixel values. This is appropriate for some data compression and image enhancement tasks, but inappropriate for others. The transform representation is particularly unsuited for machine vision and computer graphics, where the spatial location of pattem elements is critical. Recently there has been a great deal of interest in representations that retain spatial localization as well as localization in the spatial—frequency domain. This is achieved by decomposing the image into a set of spatial frequency bandpass component images. Individual samples of a component image represent image pattern information that is appropriately localized, while the bandpassed image as a whole represents information about a particular fineness of detail or scale. There is evidence that the human visual system uses such a representation, 1 and multiresolution schemes are becoming increasingly popular in machine vision and in image processing in general. The importance of analyzing images at many scales arises from the nature of images themselves. Scenes in the world contain objects of many sizes, and these objects contain features of many sizes. Moreover, objects can be at various distances from the viewer. As a result, any analysis procedure that is applied only at a single scale may miss information at other scales. The solution is to carry out analyses at all scales simultaneously. Convolution is the basic operation of most image analysis systems, and convolution with large weighting functions is a notoriously expensive computation. In a multiresolution system one wishes to perform convolutions with kernels of many sizes, ranging from very small to very large. and the computational problems appear forbidding. Therefore one of the main problems in working with multiresolution representations is to develop fast and efficient techniques. Members of the Advanced Image Processing Research Group have been actively involved in the development of multiresolution techniques for some time. Most of the work revolves around a representation known as a \"pyramid,\" which is versatile, convenient, and efficient to use. We have applied pyramid-based methods to some fundamental problems in image analysis, data compression, and image manipulation.",
"title": ""
},
{
"docid": "0b191398f6458d8516ff65c74550bd68",
"text": "It is now recognized that gut microbiota contributes indispensable roles in safeguarding host health. Shrimp is being threatened by newly emerging diseases globally; thus, understanding the driving factors that govern its gut microbiota would facilitate an initial step to reestablish and maintain a “healthy” gut microbiota. This review summarizes the factors that assemble the shrimp gut microbiota, which focuses on the current progresses of knowledge linking the gut microbiota and shrimp health status. In particular, I propose the exploration of shrimp disease pathogenesis and incidence based on the interplay between dysbiosis in the gut microbiota and disease severity. An updated research on shrimp disease toward an ecological perspective is discussed, including host–bacterial colonization, identification of polymicrobial pathogens and diagnosing disease incidence. Further, a simple conceptual model is offered to summarize the interplay among the gut microbiota, external factors, and shrimp disease. Finally, based on the review, current limitations are raised and future studies directed at solving these concerns are proposed. This review is timely given the increased interest in the role of gut microbiota in disease pathogenesis and the advent of novel diagnosis strategies.",
"title": ""
},
{
"docid": "199079ff97d1a48819f8185c2ef23472",
"text": "Identifying domain-dependent opinion words is a key problem in opinion mining and has been studied by several researchers. However, existing work has been focused on adjectives and to some extent verbs. Limited work has been done on nouns and noun phrases. In our work, we used the feature-based opinion mining model, and we found that in some domains nouns and noun phrases that indicate product features may also imply opinions. In many such cases, these nouns are not subjective but objective. Their involved sentences are also objective sentences and imply positive or negative opinions. Identifying such nouns and noun phrases and their polarities is very challenging but critical for effective opinion mining in these domains. To the best of our knowledge, this problem has not been studied in the literature. This paper proposes a method to deal with the problem. Experimental results based on real-life datasets show promising results.",
"title": ""
},
{
"docid": "8e03f4410676fb4285596960880263e9",
"text": "Fuzzy computing (FC) has made a great impact in capturing human domain knowledge and modeling non-linear mapping of input-output space. In this paper, we describe the design and implementation of FC systems for detection of money laundering behaviors in financial transactions and monitoring of distributed storage system load. Our objective is to demonstrate the power of FC for real-world applications which are characterized by imprecise, uncertain data, and incomplete domain knowledge. For both applications, we designed fuzzy rules based on experts’ domain knowledge, depending on money laundering scenarios in transactions or the “health” of a distributed storage system. In addition, we developped a generic fuzzy inference engine and contributed to the open source community.",
"title": ""
}
] |
scidocsrr
|
699e4d6a48d344f147b879cce3717c4a
|
Neural Network Classification Algorithm with M-Learning Reviews to Improve the Classification Accuracy
|
[
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] |
[
{
"docid": "b886fbb9b40e6d634f59288bb60960a7",
"text": "Antithrombotic therapy has recently become more frequent for the treatment of venous thromboembolism (VTE) in the paediatric population. This can be explained by the increased awareness of morbidities and mortalities of VTE in children, as well as the improved survival rate of children with various kinds of serious illnesses. Considering the large number of years a child is expected to survive, associated morbidities such as postthrombotic syndrome and risk of recurrence can significantly impact on the quality of life in children. Therefore, timely diagnosis, evidence-based treatment and prophylaxis strategies are critical to avoid such complications. This review summarizes the current literature about the antithrombotic treatment for VTE in infants and children. It guides the paediatric medical care provider for making a logical and justifiable decision.",
"title": ""
},
{
"docid": "5aebd19c78b6b24c612e20970c27044f",
"text": "The concept of alignment or fit between information technology (IT) and business strategy has been discussed for many years, and strategic alignment is deemed crucial in increasing firm performance. Yet few attempts have been made to investigate the factors that influence alignment, especially in the context of small and medium sized firms (SMEs). This issue is important because results from previous studies suggest that many firms struggle to achieve alignment. Therefore, this study sought to identify different levels of alignment and then investigated the factors that influence alignment. In particular, it focused on the alignment between the requirements for accounting information (AIS requirements) and the capacity of accounting systems (AIS capacity) to generate the information, in the specific context of manufacturing SMEs in Malaysia. Using a mail questionnaire, data from 214 firms was collected on nineteen accounting information characteristics for both requirements and capacity. The fit between these two sets was explored using the moderation approach and evidence was gained that AIS alignment in some firms was high. Cluster analysis was used to find two sets of groups which could be considered more aligned and less aligned. The study then investigated some factors that might be associated with a small firm’s level of AIS alignment. Findings from the study suggest that AIS alignment was related to the firm’s: level of IT maturity; level of owner/manager’s accounting and IT knowledge; use of expertise from government agencies and accounting firms; and existence of internal IT staff.",
"title": ""
},
{
"docid": "02d8c55750904b7f4794139bcfa51693",
"text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.",
"title": ""
},
{
"docid": "038d9cae9836fcb661d3ab34dd1b0450",
"text": "Cross-lingual information extraction is the task of distilling facts from foreign language (e.g. Chinese text) into representations in another language that is preferred by the user (e.g. English tuples). Conventional pipeline solutions decompose the task as machine translation followed by information extraction (or vice versa). We propose a joint solution with a neural sequence model, and show that it outperforms the pipeline in a cross-lingual open information extraction setting by 1-4 BLEU and 0.5-0.8 F1.",
"title": ""
},
{
"docid": "709a6b1a5c49bf0e41a24ed5a6b392c9",
"text": "Th e paper presents a literature review of the main concepts of hotel revenue management (RM) and current state-of-the-art of its theoretical research. Th e article emphasises on the diff erent directions of hotel RM research and is structured around the elements of the hotel RM system and the stages of RM process. Th e elements of the hotel RM system discussed in the paper include hotel RM centres (room division, F&B, function rooms, spa & fi tness facilities, golf courses, casino and gambling facilities, and other additional services), data and information, the pricing (price discrimination, dynamic pricing, lowest price guarantee) and non-pricing (overbookings, length of stay control, room availability guarantee) RM tools, the RM software, and the RM team. Th e stages of RM process have been identifi ed as goal setting, collection of data and information, data analysis, forecasting, decision making, implementation and monitoring. Additionally, special attention is paid to ethical considerations in RM practice, the connections between RM and customer relationship management, and the legal aspect of RM. Finally, the article outlines future research perspectives and discloses potential evolution of RM in future.",
"title": ""
},
{
"docid": "388dd2641da56a83984794871e1e230b",
"text": "Mobile advertisement (ad for short) is a major financial pillar for developers to provide free mobile apps. However, it is frequently thwarted by ad fraud, where rogue code tricks ad providers by forging ad display or user clicks, or both. With the mobile ad market growing drastically (e.g., from $8.76 billion in 2012 to $17.96 billion in 2013), it is vitally important to provide a verifiable mobile ad framework to detect and prevent ad frauds. Unfortunately, this is notoriously hard as mobile ads usually run in an execution environment with a huge TCB.\n This paper proposes a verifiable mobile ad framework called AdAttester, based on ARM?s TrustZone technology. AdAttester provides two novel security primitives, namely unforgeable clicks and verifiable display. The two primitives attest that ad-related operations (e.g., user clicks) are initiated by the end user (instead of a bot) and that the ad is displayed intact and timely. AdAttester leverages the secure world of TrustZone to implement these two primitives to collect proofs, which are piggybacked on ad requests to ad providers for attestation. AdAttester is non-intrusive to mobile users and can be incrementally deployed in existing ad ecosystem. A prototype of AdAttester is implemented for Android running on a Samsung Exynos 4412 board. Evaluation using 182 typical mobile apps with ad frauds shows that AdAttester can accurately distinguish ad fraud from legitimate ad operations, yet incurs small performance overhead and little impact on user experience.",
"title": ""
},
{
"docid": "7292ceb6718d0892a154d294f6434415",
"text": "This article illustrates the application of a nonlinear system identification technique to the problem of STLF. Five NARX models are estimated using fixed-size LS-SVM, and two of the models are later modified into AR-NARX structures following the exploration of the residuals. The forecasting performance, assessed for different load series, is satisfactory. The MSE levels on the test data are below 3% in most cases. The models estimated with fixed-size LS-SVM give better results than a linear model estimated with the same variables and also better than a standard LS-SVM in dual space estimated using only the last 1000 data points. Furthermore, the good performance of the fixed-size LS-SVM is obtained based on a subset of M = 1000 initial support vectors, representing a small fraction of the available sample. Further research on a more dedicated definition of the initial input variables (for example, incorporation of external variables to reflect industrial activity, use of explicit seasonal information) might lead to further improvements and the extension toward other types of load series.",
"title": ""
},
{
"docid": "3bf37b20679ca6abd022571e3356e95d",
"text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.",
"title": ""
},
{
"docid": "e1b050e8dc79f363c4a2b956f384c8d5",
"text": "Keyphrase extraction is a fundamental technique in natural language processing. It enables documents to be mapped to a concise set of phrases that can be used for indexing, clustering, ontology building, auto-tagging and other information organization schemes. Two major families of unsupervised keyphrase extraction algorithms may be characterized as statistical and graph-based. We present a hybrid statistical-graphical algorithm that capitalizes on the heuristics of both families of algorithms and is able to outperform the state of the art in unsupervised keyphrase extraction on several datasets.",
"title": ""
},
{
"docid": "c60d916201756ceb9410d9262b6c9265",
"text": "A critical aspect of human cognition is the ability to effectively query the environment for information. The ‘real’ world is large and noisy, and therefore designing effective queries involves prioritizing both scope – the range of hypotheses addressed by the query – and reliability – the likelihood of obtaining a correct answer. Here we designed a simple information-search game in which participants had to select an informative query from a large set of queries, trading off scope and reliability. We find that adults are effective information-searchers even in large, noisy environments, and that their information search is best explained by a model that balances scope and reliability by selecting queries proportionately to their expected information gain.",
"title": ""
},
{
"docid": "3013a8b320cbbfc1ac8fed7c06d6996f",
"text": "Security and privacy are among the most pressing concerns that have evolved with the Internet. As networks expanded and became more open, security practices shifted to ensure protection of the ever growing Internet, its users, and data. Today, the Internet of Things (IoT) is emerging as a new type of network that connects everything to everyone, everywhere. Consequently, the margin of tolerance for security and privacy becomes narrower because a breach may lead to large-scale irreversible damage. One feature that helps alleviate the security concerns is authentication. While different authentication schemes are used in vertical network silos, a common identity and authentication scheme is needed to address the heterogeneity in IoT and to integrate the different protocols present in IoT. We propose in this paper an identity-based authentication scheme for heterogeneous IoT. The correctness of the proposed scheme is tested with the AVISPA tool and results showed that our scheme is immune to masquerade, man-in-the-middle, and replay attacks.",
"title": ""
},
{
"docid": "5a71d766ecd60b8973b965e53ef8ddfd",
"text": "An m-polar fuzzy model is useful for multi-polar information, multi-agent, multi-attribute and multiobject network models which gives more precision, flexibility, and comparability to the system as compared to the classical, fuzzy and bipolar fuzzy models. In this paper, m-polar fuzzy sets are used to introduce the notion of m-polar psi-morphism on product m-polar fuzzy graph (mFG). The action of this morphism is studied and established some results on weak and co-weak isomorphism. d2-degree and total d2-degree of a vertex in product mFG are defined and studied their properties. A real life situation has been modeled as an application of product mFG. c ©2018 World Academic Press, UK. All rights reserved.",
"title": ""
},
{
"docid": "b591b75b4653c01e3525a0889e7d9b90",
"text": "The concept of isogeometric analysis is proposed. Basis functions generated from NURBS (Non-Uniform Rational B-Splines) are employed to construct an exact geometric model. For purposes of analysis, the basis is refined and/or its order elevated without changing the geometry or its parameterization. Analogues of finite element hand p-refinement schemes are presented and a new, more efficient, higher-order concept, k-refinement, is introduced. Refinements are easily implemented and exact geometry is maintained at all levels without the necessity of subsequent communication with a CAD (Computer Aided Design) description. In the context of structural mechanics, it is established that the basis functions are complete with respect to affine transformations, meaning that all rigid body motions and constant strain states are exactly represented. Standard patch tests are likewise satisfied. Numerical examples exhibit optimal rates of convergence for linear elasticity problems and convergence to thin elastic shell solutions. A k-refinement strategy is shown to converge toward monotone solutions for advection–diffusion processes with sharp internal and boundary layers, a very surprising result. It is argued that isogeometric analysis is a viable alternative to standard, polynomial-based, finite element analysis and possesses several advantages. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fe715d2094119291f5c13fb0d08cace5",
"text": "The Echinacea species are native to the Atlantic drainage area of the United States of America and Canada. They have been introduced as cultivated medicinal plants in Europe. Echinacea purpurea, E. angustifolia and E. pallida are the species most often used medicinally due to their immune-stimulating properties. This review is focused on morphological and anatomical characteristics of E. purpurea, E. angustifolia, E. pallida, because various species are often misidentified and specimens are often confused in the medicinal plant market.",
"title": ""
},
{
"docid": "86ba97e91a8c2bcb1015c25df7c782db",
"text": "After a knee joint surgery, due to severe pain and immobility of the patient, the tissue around the knee become harder and knee stiffness will occur, which may causes many problems such as scar tissue swelling, bleeding, and fibrosis. A CPM (Continuous Passive Motion) machine is an apparatus that is being used to patient recovery, retrieving moving abilities of the knee, and reducing tissue swelling, after the knee joint surgery. This device prevents frozen joint syndrome (adhesive capsulitis), joint stiffness, and articular cartilage destruction by stimulating joint tissues, and flowing synovial fluid and blood around the knee joint. In this study, a new, light, and portable CPM machine with an appropriate interface, is designed and manufactured. The knee joint can be rotated from the range of -15° to 120° with a pace of 0.1 degree/sec to 1 degree/sec by this machine. One of the most important advantages of this new machine is its own user-friendly interface. This apparatus is controlled via an Android-based application; therefore, the users can use this machine easily via their own smartphones without the necessity to an extra controlling device. Besides, because of its apt size, this machine is a portable device. Smooth movement without any vibration and adjusting capability for different anatomies are other merits of this new CPM machine.",
"title": ""
},
{
"docid": "e7f25a389a4eda33442c2a0ad8d0bc16",
"text": "Computer systems are commonly attacked by malicious transport contacts. We present a comparative study that analyzes to what extent those attacks depend on the network access, in particular if an adversary targets specifically on mobile or non-mobile devices. Based on a mobile honeypot that extracts first statistical results, our findings indicate that a few topological domains of the Internet have started to place particular focus on attacking mobile networks.",
"title": ""
},
{
"docid": "517916f4c62bc7b5766efa537359349d",
"text": "Document-level sentiment classification aims to predict user’s overall sentiment in a document about a product. However, most of existing methods only focus on local text information and ignore the global user preference and product characteristics. Even though some works take such information into account, they usually suffer from high model complexity and only consider wordlevel preference rather than semantic levels. To address this issue, we propose a hierarchical neural network to incorporate global user and product information into sentiment classification. Our model first builds a hierarchical LSTM model to generate sentence and document representations. Afterwards, user and product information is considered via attentions over different semantic levels due to its ability of capturing crucial semantic components. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-theart methods. The source code of this paper can be obtained from https://github. com/thunlp/NSC.",
"title": ""
},
{
"docid": "c87a8ee5e968d2039b29f080f773af75",
"text": "The Gartner's 2014 Hype Cycle released last August moves Big Data technology from the Peak of Inflated Expectations to the beginning of the Trough of Disillusionment when interest starts to wane as reality does not live up to previous promises. As the hype is starting to dissipate it is worth asking what Big Data (however defined) means from a scientific perspective: Did the emergence of gigantic corpora exposed the limits of classical information retrieval and data mining and led to new concepts and challenges, the way say, the study of electromagnetism showed the limits of Newtonian mechanics and led to Relativity Theory, or is it all just \"sound and fury, signifying nothing\", simply a matter of scaling up well understood technologies? To answer this question, we have assembled a distinguished panel of eminent scientists, from both Industry and Academia: Lada Adamic (Facebook), Michael Franklin (University of California at Berkeley), Maarten de Rijke (University of Amsterdam), Eric Xing (Carnegie Mellon University), and Kai Yu (Baidu) will share their point of view and take questions from the moderator and the audience.",
"title": ""
},
{
"docid": "c784bfbd522bb4c9908c3f90a31199fe",
"text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.",
"title": ""
},
{
"docid": "510d755b31fc5ec908d5325b40f30078",
"text": "This study tested a model of the development of incongruity-resolution and nonsense humor during adulthood. Subjects were 4,292 14- to 66-year-old Germans. Twenty jokes and cartoons representing structure-based humor categories of incongruity resolution and nonsense were rated for funniness and aversiveness. Humor structure preferences were also assessed with a direct comparison task. The results generally confirmed the hypotheses. Incongruity-resolution humor increased in funniness and nonsense humor decreased in funniness among progressively older subjects after the late teens. Aversiveness of both forms of humor generally decreased over the ages sampled. Age differences in humor appreciation were strongly correlated with age differences in conservatism. An especially strong parallel was found between age differences in appreciation of incongruity-resolution humor and age differences in conservatism.",
"title": ""
}
] |
scidocsrr
|
5004442e422d51a134d3efc6492c3189
|
Security in Automotive Networks: Lightweight Authentication and Authorization
|
[
{
"docid": "3f8e4ddfe56737508ec2222d110291fc",
"text": "We present a new verification algorithm for security protocols that allows for unbounded verification, falsification, and complete characterization. The algorithm provides a number of novel features, including: (1) Guaranteed termination, after which the result is either unbounded correctness, falsification, or bounded correctness. (2) Efficient generation of a finite representation of an infinite set of traces in terms of patterns, also known as a complete characterization. (3) State-of-the-art performance, which has made new types of protocol analysis feasible, such as multi-protocol analysis.",
"title": ""
}
] |
[
{
"docid": "e28feb56ebc33a54d13452a2ea3a49f7",
"text": "Ping Yan, Hsinchun Chen, and Daniel Zeng Department of Management Information Systems University of Arizona, Tucson, Arizona pyan@email.arizona.edu; {hchen, zeng}@eller.arizona.edu",
"title": ""
},
{
"docid": "470ecc2bc4299d913125d307c20dd48d",
"text": "The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly [22,13]. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence, by using inference in Markov Logic Networks (MLN) [21]. We learn three different classifiers : i) local entity classifier, ii) local relation classifier and iii) “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than 2 out of 3 previous results reported on the ACE 2004 dataset.",
"title": ""
},
{
"docid": "0f4d91623a7b9893d24c9dc9354f3dce",
"text": "We derive experimentally based estimates of the energy used by neural mechanisms to code known quantities of information. Biophysical measurements from cells in the blowfly retina yield estimates of the ATP required to generate graded (analog) electrical signals that transmit known amounts of information. Energy consumption is several orders of magnitude greater than the thermodynamic minimum. It costs 104 ATP molecules to transmit a bit at a chemical synapse, and 106 - 107 ATP for graded signals in an interneuron or a photoreceptor, or for spike coding. Therefore, in noise-limited signaling systems, a weak pathway of low capacity transmits information more economically, which promotes the distribution of information among multiple pathways.",
"title": ""
},
{
"docid": "1597874bef5c515e038584b3bf72f148",
"text": "This paper presents an overview of Text Summarization. Text Summarization is a challenging problem these days. Due to the great amount of information we are provided with and thanks to the development of Internet technologies, needs of producing summaries have become more and more widespread. Summarization is a very interesting and useful task that gives support to many other tasks as well as it takes advantage of the techniques developed for related Natural Language Processing tasks. The paper we present here may help us to have an idea of what Text Summarization is and how it can be useful for.",
"title": ""
},
{
"docid": "9237b82f1d127ab59a1a5e8f9fa7f86c",
"text": "Purpose: Enterprise social media platforms provide new ways of sharing knowledge and communicating within organizations to benefit from the social capital and valuable knowledge that employees have. Drawing on social dilemma and self‐determination theory, the aim of the study is to understand what factors drive employees’ participation and what factors hamper their participation in enterprise social media. Methodology: Based on a literature review, a unified research model is derived integrating demographic, individual, organizational and technological factors that influence the motivation of employees to share knowledge. The model is tested using statistical methods on a sample of 114 respondents in Denmark. Qualitative data is used to elaborate and explain quantitative results‘ findings. Practical implications: The proposed knowledge sharing framework helps to understand what factors impact engagement on social media. Furthermore the article suggests different types of interventions to overcome the social dilemma of knowledge sharing. Findings: Our findings pinpoint towards the general drivers and barriers to knowledge sharing within organizations. The significant drivers are: enjoy helping others, monetary rewards, management support, change of knowledge sharing behavior and recognition. The significant identified barriers to knowledge sharing are: change of behavior, lack of trust and lack of time. Originality: The study contributes to an understanding of factors leading to the success or failure of enterprise social media drawing on self‐determination and social dilemma theory.",
"title": ""
},
{
"docid": "e27575b8d7a7455f1a8f941adb306a04",
"text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yiseung@seas.upenn.edu Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: smcgill3@seas.upenn.edu Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: vlarry@seas.upenn.edu Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: heqin@seas.upenn.edu Inyong Ha Robotis, Seoul, Korea e-mail: dudung@robotis.com Jeakweon Han Robotis, Seoul, Korea e-mail: jkhan@robotis.com Hyunjong Song Robotis, Seoul, Korea e-mail: hjsong@robotis.com Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: mrouleau@vt.edu Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: btzhang@bi.snu.ac.kr Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: dennishong@ucla.edu Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yim@seas.upenn.edu Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: ddlee@seas.upenn.edu",
"title": ""
},
{
"docid": "2e99e535f2605e88571407142e4927ee",
"text": "Stability is a common tool to verify the validity of sample based algorithms. In clustering it is widely used to tune the parameters of the algorithm, such as the number k of clusters. In spite of the popularity of stability in practical applications, there has been very little theoretical analysis of this notion. In this paper we provide a formal definition of stability and analyze some of its basic properties. Quite surprisingly, the conclusion of our analysis is that for large sample size, stability is fully determined by the behavior of the objective function which the clustering algorithm is aiming to minimize. If the objective function has a unique global minimizer, the algorithm is stable, otherwise it is unstable. In particular we conclude that stability is not a well-suited tool to determine the number of clusters it is determined by the symmetries of the data which may be unrelated to clustering parameters. We prove our results for center-based clusterings and for spectral clustering, and support our conclusions by many examples in which the behavior of stability is counter-intuitive.",
"title": ""
},
{
"docid": "717ea3390ffe3f3132d4e2230e645ee5",
"text": "Much of what is known about physiological systems has been learned using linear system theory. However, many biomedical signals are apparently random or aperiodic in time. Traditionally, the randomness in biological signals has been ascribed to noise or interactions between very large numbers of constituent components. One of the most important mathematical discoveries of the past few decades is that random behavior can arise in deterministic nonlinear systems with just a few degrees of freedom. This discovery gives new hope to providing simple mathematical models for analyzing, and ultimately controlling, physiological systems. The purpose of this chapter is to provide a brief pedagogic survey of the main techniques used in nonlinear time series analysis and to provide a MATLAB tool box for their implementation. Mathematical reviews of techniques in nonlinear modeling and forecasting can be found in Refs. 1-5. Biomedical signals that have been analyzed using these techniques include heart rate [6-8], nerve activity [9], renal flow [10], arterial pressure [11], electroencephalogram [12], and respiratory waveforms [13]. Section 2 provides a brief overview of dynamical systems theory including phase space portraits, Poincare surfaces of section, attractors, chaos, Lyapunov exponents, and fractal dimensions. The forced Duffing-Van der Pol oscillator (a ubiquitous model in engineering problems) is investigated as an illustrative example. Section 3 outlines the theoretical tools for time series analysis using dynamical systems theory. Reliability checks based on forecasting and surrogate data are also described. The time series methods are illustrated using data from the time evolution of one of the dynamical variables of the forced Duffing-Van der Pol oscillator. Section 4 concludes with a discussion of possible future directions for applications of nonlinear time series analysis in biomedical processes.",
"title": ""
},
{
"docid": "f554af0d260de70f6efbc8fe8d64a357",
"text": "Hypocretin deficiency causes narcolepsy and may affect neuroendocrine systems and body composition. Additionally, growth hormone (GH) alterations my influence weight in narcolepsy. Symptoms can be treated effectively with sodium oxybate (SXB; γ-hydroxybutyrate) in many patients. This study compared growth hormone secretion in patients and matched controls and established the effect of SXB administration on GH and sleep in both groups. Eight male hypocretin-deficient patients with narcolepsy and cataplexy and eight controls matched for sex, age, BMI, waist-to-hip ratio, and fat percentage were enrolled. Blood was sampled before and on the 5th day of SXB administration. SXB was taken two times 3 g/night for 5 consecutive nights. Both groups underwent 24-h blood sampling at 10-min intervals for measurement of GH concentrations. The GH concentration time series were analyzed with AutoDecon and approximate entropy (ApEn). Basal and pulsatile GH secretion, pulse regularity, and frequency, as well as ApEn values, were similar in patients and controls. Administration of SXB caused a significant increase in total 24-h GH secretion rate in narcolepsy patients, but not in controls. After SXB, slow-wave sleep (SWS) and, importantly, the cross-correlation between GH levels and SWS more than doubled in both groups. In conclusion, SXB leads to a consistent increase in nocturnal GH secretion and strengthens the temporal relation between GH secretion and SWS. These data suggest that SXB may alter somatotropic tone in addition to its consolidating effect on nighttime sleep in narcolepsy. This could explain the suggested nonsleep effects of SXB, including body weight reduction.",
"title": ""
},
{
"docid": "690a2b067af8810d5da7d3389b7b4d78",
"text": "Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NPcomplete problem. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Current available methods of computing such a bound are either time-consuming or deliver low quality bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin,Fast-Lip) that are able to certify non-trivial lower bounds of minimum adversarial distortions. Experiments show that (1) our methods deliver bounds close to (the gap is 2-3X) exact minimum distortions found by Reluplex in small networks while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to the methods based on solving linear programming problems but our algorithms are 3314,000 times faster; (3) our method is capable of solving large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that there is no polynomial time algorithm that can approximately find the minimum `1 adversarial distortion of a ReLU network with a 0.99 lnn approximation ratio unless NP=P, where n is the number of neurons in the network. Equal contribution Massachusetts Institute of Technology, Cambridge, MA UC Davis, Davis, CA Harvard University, Cambridge, MA UT Austin, Austin, TX. Source code is available at https://github.com/huanzhang12/CertifiedReLURobustness. Correspondence to: Tsui-Wei Weng <twweng@mit.edu>, Huan Zhang <huan@huan-zhang.com>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "4e23bf1c89373abaf5dc096f76c893f3",
"text": "Clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi mode based system on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible to real time data, such a data is unpredictable, non periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible to real time data by introducing a Random Sequence Generator. The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on Vertex-5 FPGA target device for real time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.",
"title": ""
},
{
"docid": "feeeb7bd9ed07917048cfd6bf0c3c6c7",
"text": "Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve a finer control. In this paper, we bridge these two objectives and introduce the concept of crossdomain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information for both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains, perform domainspecific image transfer and interpolation, and cross-domain retrieval without the need of labeled data, only paired images. We compare our model to the state-ofthe-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.",
"title": ""
},
{
"docid": "b04ae75e4f444b97976962a397ac413c",
"text": "In this paper the new topology DC/DC Boost power converter-inverter-DC motor that allows bidirectional rotation of the motor shaft is presented. In this direction, the system mathematical model is developed considering its different operation modes. Afterwards, the model validation is performed via numerical simulations by using Matlab-Simulink.",
"title": ""
},
{
"docid": "0b1310ac9630fa4a1c90dcf90d4ae327",
"text": "The Mirai Distributed Denial-of-Service (DDoS) attack exploited security vulnerabilities of Internet-of-Things (IoT) devices and thereby clearly signaled that attackers have IoT on their radar. Securing IoT is therefore imperative, but in order to do so it is crucial to understand the strategies of such attackers. For that purpose, in this paper, a novel IoT honeypot called ThingPot is proposed and deployed. Honeypot technology mimics devices that might be exploited by attackers and logs their behavior to detect and analyze the used attack vectors. ThingPot is the first of its kind, since it focuses not only on the IoT application protocols themselves, but on the whole IoT platform. A Proof-of-Concept is implemented with XMPP and a REST API, to mimic a Philips Hue smart lighting system. ThingPot has been deployed for 1.5 months and through the captured data we have found five types of attacks and attack vectors against smart devices. The ThingPot source code is made available as open source.",
"title": ""
},
{
"docid": "a01965406575363328f4dae4241a05b7",
"text": "IT governance is one of these concepts that suddenly emerged and became an important issue in the information technology area. Some organisations started with the implementation of IT governance in order to achieve a better alignment between business and IT. This paper interprets important existing theories, models and practices in the IT governance domain and derives research questions from it. Next, multiple research strategies are triangulated in order to understand how organisations are implementing IT governance in practice and to analyse the relationship between these implementations and business/IT alignment. Major finding is that organisations with more mature IT governance practices likely obtain a higher degree of business/IT alignment maturity.",
"title": ""
},
{
"docid": "322d23354a9bf45146e4cb7c733bf2ec",
"text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: brais.martinez@nottingham.ac.uk Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: michel.valstar@nottingham.ac.uk",
"title": ""
},
{
"docid": "a3308e4df796a74112b70c3244bd4d34",
"text": "Creative insight occurs with an “Aha!” experience when solving a difficult problem. Here, we investigated large-scale networks associated with insight problem solving. We recruited 232 healthy participants aged 21–69 years old. Participants completed a magnetic resonance imaging study (MRI; structural imaging and a 10 min resting-state functional MRI) and an insight test battery (ITB) consisting of written questionnaires (matchstick arithmetic task, remote associates test, and insight problem solving task). To identify the resting-state functional connectivity (RSFC) associated with individual creative insight, we conducted an exploratory voxel-based morphometry (VBM)-constrained RSFC analysis. We identified positive correlations between ITB score and grey matter volume (GMV) in the right insula and middle cingulate cortex/precuneus, and a negative correlation between ITB score and GMV in the left cerebellum crus 1 and right supplementary motor area. We applied seed-based RSFC analysis to whole brain voxels using the seeds obtained from the VBM and identified insight-positive/negative connections, i.e. a positive/negative correlation between the ITB score and individual RSFCs between two brain regions. Insight-specific connections included motor-related regions whereas creative-common connections included a default mode network. Our results indicate that creative insight requires a coupling of multiple networks, such as the default mode, semantic and cerebral-cerebellum networks.",
"title": ""
},
{
"docid": "a496f2683f49573132e5b57f7e3accf0",
"text": "Automatically generated databases of English paraphrases have the drawback that they return a single list of paraphrases for an input word or phrase. This means that all senses of polysemous words are grouped together, unlike WordNet which partitions different senses into separate synsets. We present a new method for clustering paraphrases by word sense, and apply it to the Paraphrase Database (PPDB). We investigate the performance of hierarchical and spectral clustering algorithms, and systematically explore different ways of defining the similarity matrix that they use as input. Our method produces sense clusters that are qualitatively and quantitatively good, and that represent a substantial improvement to the PPDB resource.",
"title": ""
},
{
"docid": "2b8296f8760e826046cd039c58026f83",
"text": "This study provided a descriptive and quantitative comparative analysis of data from an assessment protocol for adolescents referred clinically for gender identity disorder (n = 192; 105 boys, 87 girls) or transvestic fetishism (n = 137, all boys). The protocol included information on demographics, behavior problems, and psychosexual measures. Gender identity disorder and transvestic fetishism youth had high rates of general behavior problems and poor peer relations. On the psychosexual measures, gender identity disorder patients had considerably greater cross-gender behavior and gender dysphoria than did transvestic fetishism youth and other control youth. Male gender identity disorder patients classified as having a nonhomosexual sexual orientation (in relation to birth sex) reported more indicators of transvestic fetishism than did male gender identity disorder patients classified as having a homosexual sexual orientation (in relation to birth sex). The percentage of transvestic fetishism youth and male gender identity disorder patients with a nonhomosexual sexual orientation self-reported similar degrees of behaviors pertaining to transvestic fetishism. Last, male and female gender identity disorder patients with a homosexual sexual orientation had more recalled cross-gender behavior during childhood and more concurrent cross-gender behavior and gender dysphoria than did patients with a nonhomosexual sexual orientation. The authors discuss the clinical utility of their assessment protocol.",
"title": ""
}
] |
scidocsrr
|
eec3eb04fab51ee319006a1df6b909a4
|
Social Interactions in Massively Multiplayer Online Role-Playing Gamers
|
[
{
"docid": "01b9bf49c88ae37de79b91edeae20437",
"text": "While online, some people self-disclose or act out more frequently or intensely than they would in person. This article explores six factors that interact with each other in creating this online disinhibition effect: dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority. Personality variables also will influence the extent of this disinhibition. Rather than thinking of disinhibition as the revealing of an underlying \"true self,\" we can conceptualize it as a shift to a constellation within self-structure, involving clusters of affect and cognition that differ from the in-person constellation.",
"title": ""
}
] |
[
{
"docid": "57a76d88f9344c4cc4ecbe9c4284e144",
"text": "Anomaly detection is an important problem that has been well-studied within diverse research areas and application domains. The aim of this survey is two fold, firstly we present a structured and comprehensive overview of research methods in deep learning-based anomaly detection. Furthermore, we review the adoption of these methods for anomaly across various application domains and asess their effectiveness. We have grouped state-of-the-art research techniques into different categories based on the underlying assumptions and approach adopted. Within each category we outline the basic anomaly detection technique, alongwith its variants and present key assumptions, to differentiate between normal and anomalous behavior. For each category we present we also present the advantages and limitations and discuss the computational complexity of the techniques in real application domains. Finally, we outline open issues in research and challenges faced while adopting these techniques.",
"title": ""
},
{
"docid": "ef87998cfef1a7637a7ce1f2e2a8e0d8",
"text": "Zero Shot Learning (ZSL) enables a learning model to classify instances of an unseen class during training. While most research in ZSL focuses on single-label classification, few studies have been done in multi-label ZSL, where an instance is associated with a set of labels simultaneously, due to the difficulty in modeling complex semantics conveyed by a set of labels. In this paper, we propose a novel approach to multi-label ZSL via concept embedding learned from collections of public users’ annotations of multimedia. Thanks to concept embedding, multi-label ZSL can be done by efficiently mapping an instance input features onto the concept embedding space in a similar manner used in single-label ZSL. Moreover, our semantic learning model is capable of embedding an out-of-vocabulary label by inferring its meaning from its co-occurring labels. Thus, our approach allows both seen and unseen labels during the concept embedding learning to be used in the aforementioned instance mapping, which makes multi-label ZSL more flexible and suitable for real applications. Experimental results of multilabel ZSL on images and music tracks suggest that our approach outperforms a state-of-the-art multi-label ZSL model and can deal with a scenario involving out-of-vocabulary labels without re-training the semantics learning model.",
"title": ""
},
{
"docid": "d4e3140142dde965e1dafe318a0e5aff",
"text": "An exclusive focus on bottom-line income misses important information about the quality of earnings. Accruals (the difference between accounting earnings and cash flow) are reliably, negatively associated with future stock returns. Earnings increases that are accompanied by high accruals, suggesting low-quality earnings, are associated with poor future returns. We explore various hypotheses — earnings manipulation, extrapolative biases about future growth, and under-reaction to changes in business conditions — to explain accruals’ predictive power. Distinctions between the hypotheses are based on evidence from operating performance, the behavior of individual accrual items, discretionary versus nondiscretionary components of accruals, and special items. We check for robustness using within-industry comparisons, and data on U.K.",
"title": ""
},
{
"docid": "349f24f645b823a7b0cc411d5e2a308e",
"text": "In this paper, the analysis and design of an asymmetrical half bridge flyback DC-DC converter is presented, which can minimize the switching power loss by realizing the zero-voltage switching (ZVS) during the transition between the two switches and the zero-current-switching (ZCS) on the output diode. As a result, high efficiency can be achieved. The principle of the converter operation is explained and analyzed. In order to ensure the realization of ZVS in operation, the required interlock delay time between the gate signals of the two switches, the transformer leakage inductance, and the ZVS range of the output current variation are properly calculated. Experimental results from a 8 V/8 A, 200 kHz circuit are also presented, which verify the theoretical analysis.",
"title": ""
},
{
"docid": "18a72ae6ab7c6b7745532eefa3021001",
"text": "In this paper, we design and develop RIO, a novel battery-free touch sensing user interface (UI) primitive for future IoT and smart spaces. RIO enables UIs to be constructed using off-the-shelf RFID readers and tags, and provides a unique approach to designing smart IoT spaces. With RIO, any surface can be turned into a touch-aware surface by simply attaching RFID tags to them. RIO also supports custom-designed RFID tags, and thus allows specially customized UIs to be easily deployed into a real-world environment. RIO is built using the technique of impedance tracking: when a human finger touches the surface of an RFID tag, the impedance of the antenna changes. This change manifests as a change in the phase of the RFID backscattered signal, and is used by RIO to track fine-grained touch movement over both off-the shelf and custom built tags. We study this impedance behavior in-depth and show how RIO is a reliable UI primitive that is robust even within a multi-tag environment. We leverage this primitive to build a prototype of RIO that can continuously locate a finger during a swipe movement to within 3 mm of its actual position. We also show how custom-design RFID tags can be built and used with RIO, and provide two example applications that demonstrate its real-world use.",
"title": ""
},
{
"docid": "3bb9fc6e09c9ce13252a04d6978d1bfc",
"text": "Recently, sparse coding has been successfully applied in visual tracking. The goal of this paper is to review the state-of-the-art tracking methods based on sparse coding. We first analyze the benefits of using sparse coding in visual tracking and then categorize these methods into appearance modeling based on sparse coding (AMSC) and target searching based on sparse representation (TSSR) as well as their combination. For each categorization, we introduce the basic framework and subsequent improvements with emphasis on their advantages and disadvantages. Finally, we conduct extensive experiments to compare the representative methods on a total of 20 test sequences. The experimental results indicate that: (1) AMSC methods significantly outperform TSSR methods. (2) For AMSC methods, both discriminative dictionary and spatial order reserved pooling operators are important for achieving high tracking accuracy. (3) For TSSR methods, the widely used identity pixel basis will degrade the performance when the target or candidate images are not aligned well or severe occlusion occurs. (4) For TSSR methods, ‘1 norm minimization is not necessary. In contrast, ‘2 norm minimization can obtain comparable performance but with lower computational cost. The open questions and future research topics are also discussed. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "893c7a1694596d0c8d58b819500ff9f9",
"text": "A recently introduced deep neural network (DNN) has achieved some unprecedented gains in many challenging automatic speech recognition (ASR) tasks. In this paper deep neural network hidden Markov model (DNN-HMM) acoustic models is introduced to phonotactic language recognition and outperforms artificial neural network hidden Markov model (ANN-HMM) and Gaussian mixture model hidden Markov model (GMM-HMM) acoustic model. Experimental results have confirmed that phonotactic language recognition system using DNN-HMM acoustic model yields relative equal error rate reduction of 28.42%, 14.06%, 18.70% and 12.55%, 7.20%, 2.47% for 30s, 10s, 3s comparing with the ANN-HMM and GMM-HMM approaches respectively on National Institute of Standards and Technology language recognition evaluation (NIST LRE) 2009 tasks.",
"title": ""
},
{
"docid": "22a8e467c97ffa7896d7fbbe700debbb",
"text": "Automated detection and 3D modelling of objects in laser range data is of great importance in many app lications. Existing approaches to object detection in range data are li mited to either 2.5D data (e.g. range images) or si mple objects with a parametric form (e.g. spheres). This paper describes a new app ro ch to the detection of 3D objects with arbitrary shapes in a point cloud. We present an extension of the generalized Hough trans form to 3D data, which can be used to detect instan ces of an object model in laser range data, independent of the scale and orientatio of the object. We also discuss the computational complexity of the method and provide cost-reduction strategies that can be emplo yed to improve the efficiency of the method.",
"title": ""
},
{
"docid": "5701585d5692b4b28da3132f4094fc9f",
"text": "We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.",
"title": ""
},
{
"docid": "00547f45936c7cea4b7de95ec1e0fbcd",
"text": "With the emergence of the Internet of Things (IoT) and Big Data era, many applications are expected to assimilate a large amount of data collected from environment to extract useful information. However, how heterogeneous computing devices of IoT ecosystems can execute the data processing procedures has not been clearly explored. In this paper, we propose a framework which characterizes energy and performance requirements of the data processing applications across heterogeneous devices, from a server in the cloud and a resource-constrained gateway at edge. We focus on diverse machine learning algorithms which are key procedures for handling the large amount of IoT data. We build analytic models which automatically identify the relationship between requirements and data in a statistical way. The proposed framework also considers network communication cost and increasing processing demand. We evaluate the proposed framework on two heterogenous devices, a Raspberry Pi and a commercial Intel server. We show that the identified models can accurately estimate performance and energy requirements with less than error of 4.8% for both platforms. Based on the models, we also evaluate whether the resource-constrained gateway can process the data more efficiently than the server in the cloud. The results present that the less-powerful device can achieve better energy and performance efficiency for more than 50% of machine learning algorithms.",
"title": ""
},
{
"docid": "ee9e24f38d7674e601ab13b73f3d37db",
"text": "This paper presents the design of an application specific hardware for accelerating High Frequency Trading applications. It is optimized to achieve the lowest possible latency for interpreting market data feeds and hence enable minimal round-trip times for executing electronic stock trades. The implementation described in this work enables hardware decoding of Ethernet, IP and UDP as well as of the FAST protocol which is a common protocol to transmit market feeds. For this purpose, we developed a microcode engine with a corresponding instruction set as well as a compiler which enables the flexibility to support a wide range of applied trading protocols. The complete system has been implemented in RTL code and evaluated on an FPGA. Our approach shows a 4x latency reduction in comparison to the conventional Software based approach.",
"title": ""
},
{
"docid": "6db08236408b70e6d22503dde2104c2f",
"text": "Accurate and robust segmentation of abdominal organs on CT is essential for many clinical applications such as computer-aided diagnosis and computer-aided surgery. But this task is challenging due to the weak boundaries of organs, the complexity of the background, and the variable sizes of different organs. To address these challenges, we introduce a novel framework for multi-organ segmentation of abdominal regions by using organ-attention networks with reverse connections (OAN-RCs) which are applied to 2D views, of the 3D CT volume, and output estimates which are combined by statistical fusion exploiting structural similarity. More specifically, OAN is a two-stage deep convolutional network, where deep network features from the first stage are combined with the original image, in a second stage, to reduce the complex background and enhance the discriminative information for the target organs. Intuitively, OAN reduces the effect of the complex background by focusing attention so that each organ only needs to be discriminated from its local background. RCs are added to the first stage to give the lower layers more semantic information thereby enabling them to adapt to the sizes of different organs. Our networks are trained on 2D views (slices) enabling us to use holistic information and allowing efficient computation (compared to using 3D patches). To compensate for the limited cross-sectional information of the original 3D volumetric CT, e.g., the connectivity between neighbor slices, multi-sectional images are reconstructed from the three different 2D view directions. Then we combine the segmentation results from the different views using statistical fusion, with a novel term relating the structural similarity of the 2D views to the original 3D structure. To train the network and evaluate results, 13 structures were manually annotated by four human raters and confirmed by a senior expert on 236 normal cases. We tested our algorithm by 4-fold cross-validation and computed Dice-Sørensen similarity coefficients (DSC) and surface distances for evaluating our estimates of the 13 structures. Our experiments show that the proposed approach gives strong results and outperforms 2Dand 3D-patch based state-of-the-art methods in terms of DSC and mean surface distances.",
"title": ""
},
{
"docid": "e3a2b7d38a777c0e7e06d2dc443774d5",
"text": "The area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, has been widely used to measure model performance for binary classification tasks. It can be estimated under parametric, semiparametric and nonparametric assumptions. The non-parametric estimate of the AUC, which is calculated from the ranks of predicted scores of instances, does not always sufficiently take advantage of the predicted scores. This problem is tackled in this paper. On the basis of the ranks and the original values of the predicted scores, we introduce a new metric, called a scored AUC or sAUC. Experimental results on 20 UCI data sets empirically demonstrate the validity of the new metric for classifier evaluation and selection.",
"title": ""
},
{
"docid": "b24e5a512306f24568f3e21af08a1faf",
"text": "We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300×300 achieved 78.5% mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512×512 sized input achieved 80.8% mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.",
"title": ""
},
{
"docid": "6f1fc6a07d0beb235f5279e17a46447f",
"text": "Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents. But, it has many limitations such as inaccurate extraction to essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new concept of timestamp approach with Naïve Bayesian Classification approach for multidocument text summarization. The timestamp provides the summary an ordered look, which achieves the coherent looking summary. It extracts the more relevant information from the multiple documents. Here, scoring strategy is also used to calculate the score for the words to obtain the word frequency. The higher linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents the comparison between the proposed methods with the existing MEAD algorithm. The timestamp procedure is also applied on the MEAD algorithm and the results are examined with the proposed method. The results show that the proposed method results in lesser time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method results in better precision, recall, and F-score than the existing clustering with lexical chaining approach.",
"title": ""
},
{
"docid": "01ff7e55830977622482ab018acd2cfe",
"text": "Dictionary learning has been widely used in many image processing tasks. In most of these methods, the number of basis vectors is either set by experience or coarsely evaluated empirically. In this paper, we propose a new scale adaptive dictionary learning framework, which jointly estimates suitable scales and corresponding atoms in an adaptive fashion according to the training data, without the need of prior information. We design an atom counting function and develop a reliable numerical scheme to solve the challenging optimization problem. Extensive experiments on texture and video data sets demonstrate quantitatively and visually that our method can estimate the scale, without damaging the sparse reconstruction ability.",
"title": ""
},
{
"docid": "f1c1a0baa9f96d841d23e76b2b00a68d",
"text": "Introduction to Derivative-Free Optimization Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field. 2009 · xii + 277 pages · Softcover · ISBN 978-0-898716-68-9 List Price $73.00 · RUNDBRIEF Price $51.10 · Code MP08",
"title": ""
},
{
"docid": "1a446469e6b4357373b61f88255407cf",
"text": "In the Western Hemisphere, Zika virus is thought to be transmitted primarily by Aedes aegypti mosquitoes. To determine the extent to which Ae. albopictus mosquitoes from the United States are capable of transmitting Zika virus and the influence of virus dose, virus strain, and mosquito species on vector competence, we evaluated multiple doses of representative Zika virus strains in Ae. aegypti and Ae. albopictus mosquitoes. Virus preparation (fresh vs. frozen) significantly affected virus infectivity in mosquitoes. We calculated 50% infectious doses to be 6.1-7.5 log 10 PFU/mL; minimum infective dose was 4.2 log 10 PFU/mL. Ae. albopictus mosquitoes were more susceptible to infection than Ae. aegypti mosquitoes, but transmission efficiency was higher for Ae. aegypti mosquitoes, indicating a transmission barrier in Ae. albopictus mosquitoes. Results suggest that, although Zika virus transmission is relatively inefficient overall and dependent on virus strain and mosquito species, Ae. albopictus mosquitoes could become major vectors in the Americas.",
"title": ""
},
{
"docid": "02d5de2ea87f5bcf27e45fc073fc6b23",
"text": "Sentiment analysis aims to extract users’ opinions from review documents. Nowadays, there are two main approaches for sentiment analysis: the semantic orientation and the machine learning. Sentiment analysis approaches based on Machine Learning (ML) methods work over a set of features extracted from the users’ opinions. However, the high dimensionality of the feature vector reduces the effectiveness of this approach. In this sense, we propose a sentiment classification method based on feature selection mechanisms and ML methods. The present method uses a hybrid feature extraction method based on POS pattern and dependency parsing. The features obtained are enriched semantically through commonsense knowledge bases. Then, a feature selection method is applied to eliminate the noisy and irrelevant features. Finally, a set of classifiers is trained in order to classify unknown data. To prove the effectiveness of our approach, we have conducted an evaluation in the movies and technological products domains. Also, our proposal was compared with well-known methods and algorithms used on the sentiment classification field. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.786 to 0.898 for the aforementioned domains.",
"title": ""
},
{
"docid": "6bdcd13e63a4f24561f575efcd232dad",
"text": "Men have called me mad,” wrote Edgar Allan Poe, “but the question is not yet settled, whether madness is or is not the loftiest intelligence— whether much that is glorious—whether all that is profound—does not spring from disease of thought—from moods of mind exalted at the expense of the general intellect.” Many people have long shared Poe’s suspicion that genius and insanity are entwined. Indeed, history holds countless examples of “that fine madness.” Scores of influential 18thand 19th-century poets, notably William Blake, Lord Byron and Alfred, Lord Tennyson, wrote about the extreme mood swings they endured. Modern American poets John Berryman, Randall Jarrell, Robert Lowell, Sylvia Plath, Theodore Roethke, Delmore Schwartz and Anne Sexton were all hospitalized for either mania or depression during their lives. And many painters and composers, among them Vincent van Gogh, Georgia O’Keeffe, Charles Mingus and Robert Schumann, have been similarly afflicted. Judging by current diagnostic criteria, it seems that most of these artists—and many others besides—suffered from one of the major mood disorders, namely, manic-depressive illness or major depression. Both are fairly common, very treatable and yet frequently lethal diseases. Major depression induces intense melancholic spells, whereas manic-depression, Manic-Depressive Illness and Creativity",
"title": ""
}
] |
scidocsrr
|
24d35363f266edd3f24ae8499731f87c
|
Polymorphic Malware Detection Using Sequence Classification Methods
|
[
{
"docid": "4fa73e04ccc8620c12aaea666ea366a6",
"text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike",
"title": ""
},
{
"docid": "99093a77acc305adbd0caf577d46a26b",
"text": "MOTIVATION\nCounting the number of occurrences of every k-mer (substring of length k) in a long string is a central subproblem in many applications, including genome assembly, error correction of sequencing reads, fast multiple sequence alignment and repeat detection. Recently, the deep sequence coverage generated by next-generation sequencing technologies has caused the amount of sequence to be processed during a genome project to grow rapidly, and has rendered current k-mer counting tools too slow and memory intensive. At the same time, large multicore computers have become commonplace in research facilities allowing for a new parallel computational paradigm.\n\n\nRESULTS\nWe propose a new k-mer counting algorithm and associated implementation, called Jellyfish, which is fast and memory efficient. It is based on a multithreaded, lock-free hash table optimized for counting k-mers up to 31 bases in length. Due to their flexibility, suffix arrays have been the data structure of choice for solving many string problems. For the task of k-mer counting, important in many biological applications, Jellyfish offers a much faster and more memory-efficient solution.\n\n\nAVAILABILITY\nThe Jellyfish software is written in C++ and is GPL licensed. It is available for download at http://www.cbcb.umd.edu/software/jellyfish.",
"title": ""
}
] |
[
{
"docid": "c21760fd43f8241f53f74d0126c53fa3",
"text": "OBJECTIVE\nTo assess the labial and lingual alveolar bone thickness in adults with maxillary central incisors of different inclination by cone-beam computed tomography (CBCT).\n\n\nMETHODS\nNinety maxillary central incisors from 45 patients were divided into three groups based on the maxillary central incisors to palatal plane angle; lingual-inclined, normal, and labial-inclined. Reformatted CBCT images were used to measure the labial and lingual alveolar bone thickness (ABT) at intervals corresponding to every 1/10 of the root length. The sum of labial ABT and lingual ABT at the level of the root apex was used to calculate the total ABT (TABT). The number of teeth exhibiting alveolar fenestration and dehiscence in each group was also tallied. One-way analysis of variance and Tukey's honestly significant difference test were applied for statistical analysis.\n\n\nRESULTS\nThe labial ABT and TABT values at the root apex in the lingual-inclined group were significantly lower than in the other groups (p < 0.05). Lingual and labial ABT values were very low at the cervical level in the lingual-inclined and normal groups. There was a higher prevalence of alveolar fenestration in the lingual-inclined group.\n\n\nCONCLUSIONS\nLingual-inclined maxillary central incisors have less bone support at the level of the root apex and a greater frequency of alveolar bone defects than normal maxillary central incisors. The bone plate at the marginal level is also very thin.",
"title": ""
},
{
"docid": "72a44b022df79077d6c5f4dd472b9fe9",
"text": "The minimal state of consciousness is sentience. This includes any phenomenal sensory experience - exteroceptive, such as vision and olfaction; interoceptive, such as pain and hunger; or proprioceptive, such as the sense of bodily position and movement. We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience), its phylogenetically earliest sustainable manifestation and the driver of its evolution. We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience). We end with a discussion of the implications of our proposal for the distribution of consciousness in the animal kingdom, suggesting testable predictions, and revisiting the ongoing debate about the function of minimal consciousness in light of our approach.",
"title": ""
},
{
"docid": "33dcb0fd31109555e21a224f981d01cc",
"text": "A new printed quadrifilar helix antenna with integrated feed network was designed for satellite mobile communication systems. The input impedance of this antenna in each port is about 50Ω without a matching circuit. It makes the feed network simplified and miniaturized. Parameter tradeoffs have a direct bearing on antenna performance in antenna design. A trial and error approach to this problem is both costly and unreliable. The purpose of this work is to use computer electromagnetic simulation to predict antenna properties for optimum performance. The proposed antenna displays small size, light weight, low cost, large bandwidth, hemispherical radiation pattern and excellent circular polarization.",
"title": ""
},
{
"docid": "5d827a27d9fb1fe4041e21dde3b8ce44",
"text": "Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership, under rigorous security definitions, and rigorous efficiency requirements of Petabyte scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.",
"title": ""
},
{
"docid": "741a897b87cc76d68f5400974eee6b32",
"text": "Numerous techniques exist to augment the security functionality of Commercial O -The-Shelf (COTS) applications and operating systems, making them more suitable for use in mission-critical systems. Although individually useful, as a group these techniques present di culties to system developers because they are not based on a common framework which might simplify integration and promote portability and reuse. This paper presents techniques for developing Generic Software Wrappers { protected, non-bypassable kernel-resident software extensions for augmenting security without modi cation of COTS source. We describe the key elements of our work: our high-level Wrapper De nition Language (WDL), and our framework for con guring, activating, and managing wrappers. We also discuss code reuse, automatic management of extensions, a framework for system-building through composition, platform-independence, and our experiences with our Solaris and FreeBSD prototypes.",
"title": ""
},
{
"docid": "dbaadbff5d9530c3b33ae1231eeec217",
"text": "A group of 1st-graders who were administered a battery of reading tasks in a previous study were followed up as 11th graders. Ten years later, they were administered measures of exposure to print, reading comprehension, vocabulary, and general knowledge. First-grade reading ability was a strong predictor of all of the 11th-grade outcomes and remained so even when measures of cognitive ability were partialed out. First-grade reading ability (as well as 3rd- and 5th-grade ability) was reliably linked to exposure to print, as assessed in the 11th grade, even after 11th-grade reading comprehension ability was partialed out, indicating that the rapid acquisition of reading ability might well help develop the lifetime habit of reading, irrespective of the ultimate level of reading comprehension ability that the individual attains. Finally, individual differences in exposure to print were found to predict differences in the growth in reading comprehension ability throughout the elementary grades and thereafter.",
"title": ""
},
{
"docid": "d817f4d8ba1eb4c4318008edcd3e1f8b",
"text": "This paper presents the development of a perception system for indoor environments to allow autonomous navigation for surveillance mobile robots. The system is composed by two parts. The first part is a reactive navigation system in which a mobile robot moves avoiding obstacles in environment, using the distance sensor Kinect. The second part of this system uses a artificial neural network (ANN) to recognize different configurations of the environment, for example, path ahead, left path, right path and intersections. The ANN is trained using data captured by the Kinect sensor in indoor environments. This way, the robot becomes able to perform a topological navigation combining internal reactive behavior to avoid obstacles and the ANN to locate the robot in the environment, in a deliberative behavior. The topological map is represented by a graph which represents the configuration of the environment, where the hallways (path ahead) are the edges and locations (left path and intersection, for example) are the vertices. The system also works in the dark, which is a great advantage for surveillance systems. The experiments were performed with a Pioneer P3-AT robot equipped with a Kinect sensor in order to validate and evaluate this approach. The proposed method demonstrated to be a promising approach to autonomous mobile robots navigation.",
"title": ""
},
{
"docid": "1094c5dfc72a27324753af3891b45369",
"text": "Recent studies demonstrate the effectiveness of Recurrent Neural Networks (RNNs) for action recognition in videos. However, previous works mainly utilize video-level category as supervision to train RNNs, which may prohibit RNNs to learn complex motion structures along time. In this paper, we propose a recurrent pose-attention network (RPAN) to address this challenge, where we introduce a novel pose-attention mechanism to adaptively learn pose-related features at every time-step action prediction of RNNs. More specifically, we make three main contributions in this paper. Firstly, unlike previous works on pose-related action recognition, our RPAN is an end-toend recurrent network which can exploit important spatialtemporal evolutions of human pose to assist action recognition in a unified framework. Secondly, instead of learning individual human-joint features separately, our poseattention mechanism learns robust human-part features by sharing attention parameters partially on the semanticallyrelated human joints. These human-part features are then fed into the human-part pooling layer to construct a highlydiscriminative pose-related representation for temporal action modeling. Thirdly, one important byproduct of our RPAN is pose estimation in videos, which can be used for coarse pose annotation in action videos. We evaluate the proposed RPAN quantitatively and qualitatively on two popular benchmarks, i.e., Sub-JHMDB and PennAction. Experimental results show that RPAN outperforms the recent state-of-the-art methods on these challenging datasets.",
"title": ""
},
{
"docid": "4ae2d7ccfb3bbfc8dedd2715f73f823b",
"text": "Mindfulness- based Cognitive Therapy (MBCT) is a meditation program based on an integration of Cognitive behavioural therapy and Mindfulness-based stress reduction. The aim of the present work is to review and conduct a meta-analysis of the current findings about the efficacy of MBCT for psychiatric patients. A literature search was undertaken using five electronic databases and references of retrieved articles. Main findings included the following: 1) MBCT in adjunct to usual care was significantly better than usual care alone for reducing major depression (MD) relapses in patients with three or more prior depressive episodes (4 studies), 2) MBCT plus gradual discontinuation of maintenance ADs was associated to similar relapse rates at 1year as compared with continuation of maintenance antidepressants (1 study), 3) the augmentation of MBCT could be useful for reducing residual depressive symptoms in patients with MD (2 studies) and for reducing anxiety symptoms in patients with bipolar disorder in remission (1 study) and in patients with some anxiety disorders (2 studies). However, several methodological shortcomings including small sample sizes, non-randomized design of some studies and the absence of studies comparing MBCT to control groups designed to distinguish specific from non-specific effects of such practice underscore the necessity for further research.",
"title": ""
},
{
"docid": "2bdf19ecf701eae1b3e9c3f9cf81387d",
"text": "Log file correlation is related to two distinct activities: Intrusion Detection and Network Forensics. It is more important than ever that these two disciplines work together in a mutualistic relationship in order to avoid Points of Failure. This paper, intended as a tutorial for those dealing with such issues, presents an overview of log analysis and correlation, with special emphasis on the tools and techniques for managing them within a network forensics context. In particular it will cover the most important parts of Log Analysis and correlation, starting from the Acquisition Process until the analysis.",
"title": ""
},
{
"docid": "853b5ab3ed6a9a07c8d11ad32d0e58ad",
"text": "We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series modelshidden Markov models and linear dynamical systemsand is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact expectation maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward and backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.",
"title": ""
},
{
"docid": "3dcfcaa97fcc1bce04ce515027e64927",
"text": "Abs t rac t . RoboCup is an attempt to foster AI and intelligent robotics research by providing a standard problem where wide range of technologies can be integrated and exaznined. The first R o b o C u p competition was held at IJCAI-97, Nagoya. In order for a robot team to actually perform a soccer game, various technologies must be incorporated including: design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensorfllsion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup's final target is a world cup with real robots, RoboCup offers a softwaxe platform for research on the software aspects of RoboCup. This paper describes technical chalhmges involw~d in RoboCup, rules, and simulation environment.",
"title": ""
},
{
"docid": "03e82d63b105a4ffd9af8a5fc473b5ed",
"text": "This paper describes a lumped-element 5-way Wilkinson power divider with broadband characteristics. The circuit contains multi-section LC-ladder circuits between input and output ports, and each output port is connected through series RLC circuits. By designing the divider based on multi-section matching transformer and L-section matching network techniques, the proposed 5-way divider can achieve broadband characteristics. In order to verify the design procedure, the proposed divider was designed and fabricated at a center frequency of 300MHz. The fabricated divider exhibited broadband characteristics with a relative bandwidth of about 75%.",
"title": ""
},
{
"docid": "c96e8afc0c3e0428a257ba044cd2a35a",
"text": "The tumor necrosis factor ligand superfamily member receptor activator of nuclear factor-kB (NF-kB) ligand (RANKL), its cellular receptor, receptor activator of NF-kB (RANK), and the decoy receptor, osteoprotegerin (OPG) represent a novel cytokine triad with pleiotropic effects on bone metabolism, the immune system, and endocrine functions (1). RANKL is produced by osteoblastic lineage cells and activated T lymphocytes (2– 4) and stimulates its receptor, RANK, which is located on osteoclasts and dendritic cells (DC) (4, 5). The effects of RANKL within the skeleton include osteoblast –osteoclast cross-talks, resulting in enhanced differentiation, fusion, activation, and survival of osteoclasts (3, 6), while in the immune system, RANKL promotes the survival and immunostimulatory capacity of DC (1, 7). OPG acts as a soluble decoy receptor that neutralizes RANKL, thus preventing activation of RANK (8). The RANKL/RANK/OPG system has been implicated in various skeletal and immune-mediated diseases characterized by increased bone resorption and bone loss, including several forms of osteoporosis (postmenopausal, glucocorticoid-induced, and senile osteoporosis) (9), bone metastases (10), periodontal disease (11), and rheumatoid arthritis (2). While a relative deficiency of OPG has been found to be associated with osteoporosis in various animal models (9), the parenteral administration of OPG to postmenopausal women (3 mg/kg) was beneficial in rapidly reducing enhanced biochemical markers of bone turnover by 30–80% (12). These studies have clearly established the RANKL/ OPG system as a key cytokine network involved in the regulation of bone cell biology, osteoblast–osteoclast and bone-immune cross-talks, and maintenance of bone mass. In addition to providing substantial and detailed insights into the pathogenesis of various metabolic bone diseases, the administration of OPG may become a promising therapeutic option in the prevention and treatment of benign and malignant bone disease. Several studies have attempted to evaluate the clinical relevance and potential applications of serum OPG measurements in humans. Yano et al. were the first to assess systematically OPG serum levels (by an ELISA system) in women with osteoporosis (13). Intriguingly, OPG serum levels were negatively correlated with bone mineral density (BMD) at various sites (lumbar spine, femoral neck, and total body) and positively correlated with biochemical markers of bone turnover. In view of the established protective effects of OPG on bone, these findings came as a surprise, and were interpreted as an insufficient counter-regulatory mechanism to prevent bone loss. Another group which employed a similar design (but a different OPG ELISA system) could not detect a correlation between OPG serum levels and biochemical markers of bone turnover (14), but confirmed the negative correlation of OPG serum concentrations with BMD in postmenopausal women (15). In a recent study, Szulc and colleagues (16) evaluated OPG serum levels in an age-stratified male cohort, and observed positive correlations of OPG serum levels with bioavailable testosterone and estrogen levels, negative correlations with parathyroid hormone (PTH) serum levels and urinary excretion of total deoxypyridinoline, but no correlation with BMD at any site (16). 
The finding that PTH serum levels and gene expression of OPG by bone cells are inversely correlated was also reported in postmenopausal women (17), and systemic administration of human PTH(1-34) to postmenopausal women with osteoporosis inhibited circulating OPG serum levels (18). Finally, a study of patients with renal diseases showed a decline of serum OPG levels following initiation of systemic glucocorticoid therapy (19). The regulation pattern of OPG by systemic hormones has been described in vitro, and has led to the hypothesis that most hormones and cytokines regulate bone resorption by modulating either RANKL, OPG, or both (9). Interestingly, several studies showed that serum OPG levels increased with ageing and were higher in postmenopausal women (who have an increased rate of bone loss) as compared with men, thus supporting the hypothesis of a counter-regulatory function of OPG in order to prevent further bone loss (13–16). In this issue of the Journal, Ueland and associates (20) add another important piece to the picture of OPG regulation in humans in vivo. By studying well-characterized patient cohorts with endocrine and immune diseases such as Cushing's syndrome, acromegaly, growth hormone deficiency, HIV infection, and common variable immunodeficiency (CVI), the investigators reported",
"title": ""
},
{
"docid": "d396f95b96ba06154effb6df6991a092",
"text": "Wireless networks have become the main form of Internet access. Statistics show that the global mobile Internet penetration should exceed 70% until 2019. Wi-Fi is an important player in this change. Founded on IEEE 802.11, this technology has a crucial impact in how we share broadband access both in domestic and corporate networks. However, recent works have indicated performance issues in Wi-Fi networks, mainly when they have been deployed without planning and under high user density. Hence, different collision avoidance techniques and Medium Access Control protocols have been designed in order to improve Wi-Fi performance. Analyzing the collision problem, this work strengthens the claims found in the literature about the low Wi-Fi performance under dense scenarios. Then, in particular, this article overviews the MAC protocols used in the IEEE 802.11 standard and discusses solutions to mitigate collisions. Finally, it contributes presenting future trends in MAC protocols. This assists in foreseeing expected improvements for the next generation of Wi-Fi devices.",
"title": ""
},
{
"docid": "e22f9516948725be20d8e331d5bafa56",
"text": "Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in automatic surveillance of electrical infrastructure. For an automatic vision-based power line inspection system, detecting power lines from a cluttered background is one of the most important and challenging tasks. In this paper, a novel method is proposed, specifically for power line detection from aerial images. A pulse coupled neural filter is developed to remove background noise and generate an edge map prior to the Hough transform being employed to detect straight lines. An improved Hough transform is used by performing knowledge-based line clustering in Hough space to refine the detection results. The experiment on real image data captured from a UAV platform demonstrates that the proposed approach is effective for automatic power line detection.",
"title": ""
},
{
"docid": "f634f47b4d0ffe84c9f22c9e58b783dc",
"text": "BACKGROUND\nDescriptive statistics are an essential part of biometric analysis and a prerequisite for the understanding of further statistical evaluations, including the drawing of inferences. When data are well presented, it is usually obvious whether the author has collected and evaluated them correctly and in keeping with accepted practice in the field.\n\n\nMETHODS\nStatistical variables in medicine may be of either the metric (continuous, quantitative) or categorical (nominal, ordinal) type. Easily understandable examples are given. Basic techniques for the statistical description of collected data are presented and illustrated with examples.\n\n\nRESULTS\nThe goal of a scientific study must always be clearly defined. The definition of the target value or clinical endpoint determines the level of measurement of the variables in question. Nearly all variables, whatever their level of measurement, can be usefully presented graphically and numerically. The level of measurement determines what types of diagrams and statistical values are appropriate. There are also different ways of presenting combinations of two independent variables graphically and numerically.\n\n\nCONCLUSIONS\nThe description of collected data is indispensable. If the data are of good quality, valid and important conclusions can already be drawn when they are properly described. Furthermore, data description provides a basis for inferential statistics.",
"title": ""
},
{
"docid": "6aed3ffa374139fa9c4e0b7c1afb7841",
"text": "Recent longitudinal and cross-sectional aging research has shown that personality traits continue to change in adulthood. In this article, we review the evidence for mean-level change in personality traits, as well as for individual differences in change across the life span. In terms of mean-level change, people show increased selfconfidence, warmth, self-control, and emotional stability with age. These changes predominate in young adulthood (age 20-40). Moreover, mean-level change in personality traits occurs in middle and old age, showing that personality traits can change at any age. In terms of individual differences in personality change, people demonstrate unique patterns of development at all stages of the life course, and these patterns appear to be the result of specific life experiences that pertain to a person's stage of life.",
"title": ""
},
{
"docid": "a2a8228b27b066fca497ddc2fa8b323e",
"text": "Digital Image Processing has found to be useful in many domains. In sports, it can either be used as an analytical tool to determine strategic instances in a game or can be used in the broadcast of video to television viewers. Modern day coverage of sports involves multiple cameras and an array of technologies to support it, since manually going through every video coming to a station would be a near-impossible task, a wide range of Digital Image Processing algorithms are applied to do the same. Highlight Generation and Event Detection are the foremost areas in sports where a multitude of DIP algorithms exist. This study provides an insight into the applications of Digital Image Processing in Sports, concentrating on algorithms related to video broadcast while listing their advantages and drawbacks.",
"title": ""
},
{
"docid": "050443f5d84369f942c3f611775d37ed",
"text": "A variety of methods for computing factor scores can be found in the psychological literature. These methods grew out of a historic debate regarding the indeterminate nature of the common factor model. Unfortunately, most researchers are unaware of the indeterminacy issue and the problems associated with a number of the factor scoring procedures. This article reviews the history and nature of factor score indeterminacy. Novel computer programs for assessing the degree of indeterminacy in a given analysis, as well as for computing and evaluating different types of factor scores, are then presented and demonstrated using data from the Wechsler Intelligence Scale for Children-Third Edition. It is argued that factor score indeterminacy should be routinely assessed and reported as part of any exploratory factor analysis and that factor scores should be thoroughly evaluated before they are reported or used in subsequent statistical analyses.",
"title": ""
}
] |
scidocsrr
|
a45e4478471a02de5681612827486fe9
|
Publication Trends over 10 Years of Computational Thinking Research.
|
[
{
"docid": "8e2407e6fc3e3b3e5f0aeb64eb842712",
"text": "Visual programming in 3D sounds much more appealing than programming in 2D, but what are its benefits? Here, University of Colorado Boulder educators discuss the differences between 2D and 3D regarding three concepts connecting computer graphics to computer science education: ownership, spatial thinking, and syntonicity.",
"title": ""
}
] |
[
{
"docid": "f3641cacf284444ac45f0e085c7214bf",
"text": "Recognition that the entire central nervous system (CNS) is highly plastic, and that it changes continually throughout life, is a relatively new development. Until very recently, neuroscience has been dominated by the belief that the nervous system is hardwired and changes at only a few selected sites and by only a few mechanisms. Thus, it is particularly remarkable that Sir John Eccles, almost from the start of his long career nearly 80 years ago, focused repeatedly and productively on plasticity of many different kinds and in many different locations. He began with muscles, exploring their developmental plasticity and the functional effects of the level of motor unit activity and of cross-reinnervation. He moved into the spinal cord to study the effects of axotomy on motoneuron properties and the immediate and persistent functional effects of repetitive afferent stimulation. In work that combined these two areas, Eccles explored the influences of motoneurons and their muscle fibers on one another. He studied extensively simple spinal reflexes, especially stretch reflexes, exploring plasticity in these reflex pathways during development and in response to experimental manipulations of activity and innervation. In subsequent decades, Eccles focused on plasticity at central synapses in hippocampus, cerebellum, and neocortex. His endeavors extended from the plasticity associated with CNS lesions to the mechanisms responsible for the most complex and as yet mysterious products of neuronal plasticity, the substrates underlying learning and memory. At multiple levels, Eccles' work anticipated and helped shape present-day hypotheses and experiments. He provided novel observations that introduced new problems, and he produced insights that continue to be the foundation of ongoing basic and clinical research. This article reviews Eccles' experimental and theoretical contributions and their relationships to current endeavors and concepts. It emphasizes aspects of his contributions that are less well known at present and yet are directly relevant to contemporary issues.",
"title": ""
},
{
"docid": "e766cd377c223cb3d90272e8c40a54af",
"text": "This paper aims at describing the state of the art on quadratic assignment problems (QAPs). It discusses the most important developments in all aspects of the QAP such as linearizations, QAP polyhedra, algorithms to solve the problem to optimality, heuristics, polynomially solvable special cases, and asymptotic behavior. Moreover, it also considers problems related to the QAP, e.g. the biquadratic assignment problem, and discusses the relationship between the QAP and other well known combinatorial optimization problems, e.g. the traveling salesman problem, the graph partitioning problem, etc. The paper will appear in the Handbook of Combinatorial Optimization to be published by Kluwer Academic Publishers, P. Pardalos and D.-Z. Du, eds.",
"title": ""
},
{
"docid": "10365680ff0a5da9b97727bf40432aae",
"text": "In this paper, we investigate the contextualization of news documents with geographic and visual information. We propose a matrix factorization approach to analyze the location relevance for each news document. We also propose a method to enrich the document with a set of web images. For location relevance analysis, we first perform toponym extraction and expansion to obtain a toponym list from news documents. We then propose a matrix factorization method to estimate the location-document relevance scores while simultaneously capturing the correlation of locations and documents. For image enrichment, we propose a method to generate multiple queries from each news document for image search and then employ an intelligent fusion approach to collect a set of images from the search results. Based on the location relevance analysis and image enrichment, we introduce a news browsing system named NewsMap which can support users in reading news via browsing a map and retrieving news with location queries. The news documents with the corresponding enriched images are presented to help users quickly get information. Extensive experiments demonstrate the effectiveness of our approaches.",
"title": ""
},
{
"docid": "61281a3cf33b46d0026fb88e8303523c",
"text": "http://dx.doi.org/10.1016/j.chb.2015.03.079 0747-5632/ 2015 Elsevier Ltd. All rights reserved. Abbreviations: PAR, parent–adolescent relationship; IGA, internet game addiction; SC, school connectedness; DPA, deviant peer affiliation. ⇑ Corresponding author at: School of Psychology & Center for Studies of Psychological Application, South China Normal University, Guangzhou, Guangdong 510631, PR China. Tel.: +86 20 85216466. E-mail address: zhangwei@scnu.edu.cn (W. Zhang). Jianjun Zhu, Wei Zhang ⇑, Chengfu Yu, Zhenzhou Bao",
"title": ""
},
{
"docid": "e2f57214cd2ec7b109563d60d354a70f",
"text": "Despite the recent successes in machine learning, there remain many open challenges. Arguably one of the most important and interesting open research problems is that of data efficiency. Supervised machine learning models, and especially deep neural networks, are notoriously data hungry, often requiring millions of labeled examples to achieve desired performance. However, labeled data is often expensive or difficult to obtain, hindering advances in interesting and important domains. What avenues might we pursue to increase the data efficiency of machine learning models? One approach is semi-supervised learning. In contrast to labeled data, unlabeled data is often easy and inexpensive to obtain. Semi-supervised learning is concerned with leveraging unlabeled data to improve performance in supervised tasks. Another approach is active learning: in the presence of a labeling mechanism (oracle), how can we choose examples to be labeled in a way that maximizes the gain in performance? In this thesis we are concerned with developing models that enable us to improve data efficiency of powerful models by jointly pursuing both of these approaches. Deep generative models parameterized by neural networks have emerged recently as powerful and flexible tools for unsupervised learning. They are especially useful for modeling high-dimensional and complex data. We propose a deep generative model with a discriminative component. By including the discriminative component in the model, after training is complete the model is used for classification rather than variational approximations. The model further includes stochastic inputs of arbitrary dimension for increased flexibility and expressiveness. We leverage the stochastic layer to learn representations of the data which naturally accommodate semi-supervised learning. We develop an efficient Gibbs sampling procedure to marginalize the stochastic inputs while inferring labels. We extend the model to include uncertainty in the weights, allowing us to explicitly capture model uncertainty, and demonstrate how this allows us to use the model for active learning as well as semi-supervised learning. I would like to dedicate this thesis to my loving wife, parents, and sister . . .",
"title": ""
},
{
"docid": "301fc0a18bec8128165ec73e15e66eb1",
"text": "data structure queries (A). Some queries check properties of abstract data struct [11][131] such as stacks, hash tables, trees, and so on. These queries are not domain because the data structures can hold data of any domain. These queries are also differ the programming construct queries, because they check the constraints of well-defined a data structures. For example, a query about a binary tree may find the number of its nod have only one child. On the other hand, programming construct queries usually span di data structures. Abstract data structure queries can usually be expressed as class invar could be packaged with the class that implements an ADT. However, the queries that p information rather than detect violations are best answered by dynamic queries. For ex monitoring B+ trees using queries may indicate whether this data structure is efficient f underlying problem. Program construct queries (P). Program construct queries verify object relationships that related to the program implementation and not directly to the problem domain. Such q verify and visualize groups of objects that have to conform to some constraints because lower level of program design and implementation. For example, in a graphical user int implementation, every window object has a parent window, and this window referenc children widgets through the widget_collection collection (section 5.2.2). Such construct is n",
"title": ""
},
{
"docid": "2d22631dcbbae408e0856b414c2f7d8e",
"text": "During the past few years, interest in convolutional neural networks (CNNs) has risen constantly, thanks to their excellent performance on a wide range of recognition and classification tasks. However, they suffer from the high level of complexity imposed by the high-dimensional convolutions in convolutional layers. Within scenarios with limited hardware resources and tight power and latency constraints, the high computational complexity of CNNs makes them difficult to be exploited. Hardware solutions have striven to reduce the power consumption using low-power techniques, and to limit the processing time by increasing the number of processing elements (PEs). While most of ASIC designs claim a peak performance of a few hundred giga operations per seconds, their average performance is substantially lower when applied to state-of-the-art CNNs such as AlexNet, VGGNet and ResNet, leading to low resource utilization. Their performance efficiency is limited to less than 55% on average, which leads to unnecessarily high processing latency and silicon area. In this paper, we propose a dataflow which enables to perform both the fully-connected and convolutional computations for any filter/layer size using the same PEs. We then introduce a multi-mode inference engine (MMIE) based on the proposed dataflow. Finally, we show that the proposed MMIE achieves a performance efficiency of more than 84% when performing the computations of the three renown CNNs (i.e., AlexNet, VGGNet and ResNet), outperforming the best architecture in the state-of-the-art in terms of energy consumption, processing latency and silicon area.",
"title": ""
},
{
"docid": "2b0fa1c4dceb94a2d8c1395dae9fad99",
"text": "Among the major problems facing technical management today are those involving the coordination of many diverse activities toward a common goal. In a large engineering project, for example, almost all the engineering and craft skills are involved as well as the functions represented by research, development, design, procurement, construction, vendors, fabricators and the customer. Management must devise plans which will tell with as much accuracy as possible how the efforts of the people representing these functions should be directed toward the project's completion. In order to devise such plans and implement them, management must be able to collect pertinent information to accomplish the following tasks:\n (1) To form a basis for prediction and planning\n (2) To evaluate alternative plans for accomplishing the objective\n (3) To check progress against current plans and objectives, and\n (4) To form a basis for obtaining the facts so that decisions can be made and the job can be done.",
"title": ""
},
{
"docid": "d99181a13ec133373f7fb40f98ea770d",
"text": "Fisting is an uncommon and potentially dangerous sexual practice. This is usually a homosexual activity, but can also be a heterosexual or an autoerotic practice. A systematic review of the forensic literature yielded 14 published studies from 8 countries between 1968 and 2016 that met the inclusion/exclusion criteria, illustrating that external anogenital (anal and/or genital) trauma due to fisting is observed in 22.2% and 88.8% (reported consensual and non-consensual intercourse, respectively) of the subjects, while internal injuries are observed in the totality of the patients. Establishing the reliability of the conclusions of these studies is difficult due to a lack of uniformity in methodology used to detect and define injuries. Taking this limit into account, the aim of this article is to give a description of the external and internal injuries subsequent to reported consensual and non-consensual fisting practice, and try to find a relation between this sexual practice, the morphology of the injuries, the correlation with the use of drugs, and the relationship with assailant, where possible. The findings reported in this paper could be useful, especially when concerns of sexual assault arise.",
"title": ""
},
{
"docid": "7c570bf4961adaa17e8cdd6d6b7e0f68",
"text": "This paper presents a 50-MHz 5-V-input 3-W-output three-level buck converter. A real-time flying capacitor (<inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula>) calibration is proposed to ensure a constant voltage of <inline-formula> <tex-math notation=\"LaTeX\">$V_{g}$ </tex-math></inline-formula>/2 across <inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula>, which is highly dependent on various practical conditions, such as parasitic capacitance, time mismatches, or any loading circuits from <inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula>. The calibration is essential to ensure the reliability and minimize the inductor current and output voltage ripple, thus maintaining the advantages of the three-level operation and further extending the system bandwidth without encountering sub-harmonic oscillation. The converter is fabricated in a UMC 65-nm process using standard 2.5-V I/O devices, and is able to handle a 5-V input voltage and provide a 0.6–4.2-V-wide output range. In the measurement, the voltage across <inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula> is always calibrated to <inline-formula> <tex-math notation=\"LaTeX\">$V_{g}$ </tex-math></inline-formula>/2 under various conditions to release the voltage stress on the high- and low-side power transistors and <inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula>, and to ensure reliability with up to 69% output voltage ripple reduction. A 90% peak efficiency and a 23–29-ns/V reference-tracking response are also observed.",
"title": ""
},
{
"docid": "88c592bdd7bb9c9348545734a9508b7b",
"text": "environments: An introduction C.-S. Li B. L. Brech S. Crowder D. M. Dias H. Franke M. Hogstrom D. Lindquist G. Pacifici S. Pappe B. Rajaraman J. Rao R. P. Ratnaparkhi R. A. Smith M. D. Williams During the past few years, enterprises have been increasingly aggressive in moving mission-critical and performance-sensitive applications to the cloud, while at the same time many new mobile, social, and analytics applications are directly developed and operated on cloud computing platforms. These two movements are encouraging the shift of the value proposition of cloud computing from cost reduction to simultaneous agility and optimization. These requirements (agility and optimization) are driving the recent disruptive trend of software defined computing, for which the entire computing infrastructureVcompute, storage and networkVis becoming software defined and dynamically programmable. The key elements within software defined environments include capability-based resource abstraction, goal-based and policy-based workload definition, and outcome-based continuous mapping of the workload to the available resources. Furthermore, software defined environments provide the tooling and capabilities to compose workloads from existing components that are then continuously and autonomously mapped onto the underlying programmable infrastructure. These elements enable software defined environments to achieve agility, efficiency, and continuous outcome-optimized provisioning and management, plus continuous assurance for resiliency and security. This paper provides an overview and introduction to the key elements and challenges of software defined environments.",
"title": ""
},
{
"docid": "7aa15be00c4fd39beae23e17afc3aa06",
"text": "A class of nonlinear filters called rank conditioned rank selection (RCRS) filters is developed and analyzed in this paper. The RCRS filters are developed within the general framework of rank selection (RS) filters, which are filters constrained to output an order statistic from the observation set. Many previously proposed rank order based filters can be formulated as RS filters. The only difference between such filters is in the information used in deciding which order statistic to output. The information used by RCRS filters is the ranks of selected input samples, hence the name rank conditioned rank selection filters. The number of input sample ranks used is referred to as the order of the RCRS filter. The order can range from zero to the number of samples in the observation window, giving the filters valuable flexibility. Low-order filters can give good performance and are relatively simple to optimize and implement. If improved performance is demanded, the order can be increased but at the expense of filter simplicity. In this paper, many statistical and deterministic properties of the RCRS filters are presented. A procedure for optimizing over the class of RCRS filters is also presented. Finally, extensive computer simulation results that illustrate the performance of RCRS filters in comparison with other techniques in image restoration applications are presented.",
"title": ""
},
{
"docid": "182294b741c94a7921c3308955f2db62",
"text": "In this work, we propose a robust and efficient method to build dense 3D maps, using only the images grabbed by an omnidirectional camera. The map contains exhaustive information about both the structure and the appearance of the environment and it is well suited also for large scale environments.",
"title": ""
},
{
"docid": "49cab61f0bb863e759585da23c9bb96c",
"text": "An UHD AMOLED display driver IC, enabling real-time TFT non-uniformity compensation, is presented with a hybrid driving scheme. The proposed hybrid driving scheme drives a mobile UHD (3840×1920) AMOLED panel, whose scan time is 7.7μs at a scan frequency of 60Hz, through the load of 30kohm resistance and 30pF capacitance. A proposed accurate current sensor embedded in the column driver and a back-end compensation scheme reduce maximum current error between emulated TFTs within 0.94 LSB (37nA) of 8-bit gray scales. Since the TFT variation is externally compensated, a simple 3T1C pixel circuit is employed in each pixel.",
"title": ""
},
{
"docid": "d2a4213c14a439d231f6be8f54c1dc41",
"text": "Asymmetric multi-core architectures integrating cores with diverse power-performance characteristics is emerging as a promising alternative in the dark silicon era where only a fraction of the cores on chip can be powered on due to thermal limits. We introduce a hierarchical power management framework for asymmetric multi-cores that builds on control theory and coordinates multiple controllers in a synergistic manner to achieve optimal power-performance efficiency while respecting the thermal design power budget. We integrate our framework within Linux and implement/evaluate it on real ARM big.LITTLE asymmetric multi-core platform.",
"title": ""
},
{
"docid": "eb0a907ad08990b0fe5e2374079cf395",
"text": "We examine whether tolerance for failure spurs corporate innovation based on a sample of venture capital (VC) backed IPO firms. We develop a novel measure of VC investors’ failure tolerance by examining their tendency to continue investing in a venture conditional on the venture not meeting milestones. We find that IPO firms backed by more failure-tolerant VC investors are significantly more innovative. A rich set of empirical tests shows that this result is not driven by the endogenous matching between failure-tolerant VCs and startups with high exante innovation potentials. Further, we find that the marginal impact of VC failure tolerance on startup innovation varies significantly in the cross section. Being financed by a failure-tolerant VC is much more important for ventures that are subject to high failure risk. Finally, we examine the determinants of the cross-sectional heterogeneity in VC failure tolerance. We find that both capital constraints and career concerns can negatively distort VC failure tolerance. We also show that younger and less experienced VCs are more exposed to these distortions, making them less failure tolerant than more established VCs.",
"title": ""
},
{
"docid": "dd4860e8dfe73c56c7bd30863ca626b4",
"text": "Terrain rendering is an important component of many GIS applications and simulators. Most methods rely on heightmap-based terrain which is simple to acquire and handle, but has limited capabilities for modeling features like caves, steep cliffs, or overhangs. In contrast, volumetric terrain models, e.g. based on isosurfaces can represent arbitrary topology. In this paper, we present a fast, practical and GPU-friendly level of detail algorithm for large scale volumetric terrain that is specifically designed for real-time rendering applications. Our algorithm is based on a longest edge bisection (LEB) scheme. The resulting tetrahedral cells are subdivided into four hexahedra, which form the domain for a subsequent isosurface extraction step. The algorithm can be used with arbitrary volumetric models such as signed distance fields, which can be generated from triangle meshes or discrete volume data sets. In contrast to previous methods our algorithm does not require any stitching between detail levels. It generates crack free surfaces with a good triangle quality. Furthermore, we efficiently extract the geometry at runtime and require no preprocessing, which allows us to render infinite procedural content with low memory",
"title": ""
},
{
"docid": "e471aa752602d6721e69b9213a78f66c",
"text": "Rising chirps that compensate for the dispersion of the travelling wave on the basilar membrane evoke larger monaural brainstem responses than clicks. In order to test if a similar effect applies for the early processing stages of binaural information, monaurally and binaurally evoked auditory brainstem responses were recorded for clicks and chirps for levels from 10 to 60 dB nHL in steps of 10 dB. Ten thousand sweeps were collected for every stimulus condition from 10 normal hearing subjects. Wave V amplitudes are significantly larger for chirps than for clicks for all conditions. The amplitude of the binaural difference potential, DP1-DN1, is significantly larger for chirps at the levels 30 and 40 dB nHL. Both the binaurally evoked potential and the binaural difference potential exhibit steeper growth functions for chirps than for clicks for levels up to 40 dB nHL. For higher stimulation levels the chirp responses saturate approaching the click evoked amplitude. For both stimuli the latency of DP1 is shorter than the latency of the binaural wave V, which in turn is shorter than the latency of DN1. The amplitude ratio of the binaural difference potential to the binaural response is independent of stimulus level for clicks and chirps. A possible interpretation is that with click stimulation predominantly binaural interaction from high frequency regions is seen which is compatible with a processing by contralateral inhibitory and ipsilateral excitatory (IE) cells. Contributions from low frequencies are negligible since the responses from low frequencies are not synchronized for clicks. The improved synchronization at lower frequencies using chirp stimuli yields contributions from both low and high frequency neurons enlarging the amplitudes of the binaural responses as well as the binaural difference potential. Since the constant amplitude ratio of the binaural difference potential to the binaural response makes contralateral and ipsilateral excitatory interaction improbable, binaural interaction at low frequencies is presumably also of the IE type. Another conclusion of this study is that the chirp stimuli employed here are better suited for auditory brainstem responses and binaural difference potentials than click stimuli since they exhibit higher amplitudes and a better signal-to-noise ratio.",
"title": ""
},
{
"docid": "2ddc8da6f045c3ffe7231233b95a31e6",
"text": "The need for software is increasingly growing in the automotive industry. Software development projects are, however, often troubled by time and budget overruns, resulting in systems that do not fulfill customer requirements. Both research and industry lack strategies to combine reducing the long software development lifecycles (as required by time-to-market demands) with increasing the quality of the software developed. Software process improvement (SPI) provides the first step in the move towards software quality, and assessments are a vital part of this process. Unfortunately, software process assessments are often expensive and time consuming. Additionally, they often provide companies with a long list of issues without providing realistic suggestions. The goal of this paper is to describe a new low-overhead assessment method that has been designed specifically for small-to-medium-sized (SMEs) organisations wishing to be automotive software suppliers. This assessment method integrates the structured-ness of the plan-driven SPI models of Capability Maturity Model Integration (CMMI) and Automotive SPICE with the flexibleness of agile practices.",
"title": ""
},
{
"docid": "a5c072d196eed09548acba006b1e4ff6",
"text": "MapReduce is becoming the state-of-the-art computing paradigm for processing large-scale datasets on a large cluster with tens or thousands of nodes. It has been widely used in various fields such as e-commerce, Web search, social networks, and scientific computation. Understanding the characteristics of MapReduce workloads is the key to achieving better configuration decisions and improving the system throughput. However, workload characterization of MapReduce, especially in a large-scale production environment, has not been well studied yet. To gain insight on MapReduce workloads, we collected a two-week workload trace from a 2,000-node Hadoop cluster at Taobao, which is the biggest online e-commerce enterprise in Asia, ranked 14th in the world as reported by Alexa. The workload trace covered 912,157 jobs, logged from Dec. 4 to Dec. 20, 2011. We characterized the workload at the granularity of job and task, respectively and concluded with a set of interesting observations. The results of workload characterization are representative and generally consistent with data platforms for e-commerce websites, which can help other researchers and engineers understand the performance and job characteristics of Hadoop in their production environments. In addition, we use these job analysis statistics to derive several implications for potential performance optimization solutions.",
"title": ""
}
] |
scidocsrr
|
aa5572ece7a5a447157765ce129623d5
|
Detecting Events in Online Social Networks: Definitions, Trends and Challenges
|
[
{
"docid": "72bbd468c00ae45979cce3b771e4c2bf",
"text": "Twitter is a popular microblogging and social networking service with over 100 million users. Users create short messages pertaining to a wide variety of topics. Certain topics are highlighted by Twitter as the most popular and are known as “trending topics.” In this paper, we will outline methodologies of detecting and identifying trending topics from streaming data. Data from Twitter’s streaming API will be collected and put into documents of equal duration. Data collection procedures will allow for analysis over multiple timespans, including those not currently associated with Twitter-identified trending topics. Term frequency-inverse document frequency analysis and relative normalized term frequency analysis are performed on the documents to identify the trending topics. Relative normalized term frequency analysis identifies unigrams, bigrams, and trigrams as trending topics, while term frequcny-inverse document frequency analysis identifies unigrams as trending topics.",
"title": ""
},
{
"docid": "81f82ecbc43653566319c7e04f098aeb",
"text": "Social microblogs such as Twitter and Weibo are experiencing an explosive growth with billions of global users sharing their daily observations and thoughts. Beyond public interests (e.g., sports, music), microblogs can provide highly detailed information for those interested in public health, homeland security, and financial analysis. However, the language used in Twitter is heavily informal, ungrammatical, and dynamic. Existing data mining algorithms require extensive manually labeling to build and maintain a supervised system. This paper presents STED, a semi-supervised system that helps users to automatically detect and interactively visualize events of a targeted type from twitter, such as crimes, civil unrests, and disease outbreaks. Our model first applies transfer learning and label propagation to automatically generate labeled data, then learns a customized text classifier based on mini-clustering, and finally applies fast spatial scan statistics to estimate the locations of events. We demonstrate STED’s usage and benefits using twitter data collected from Latin America countries, and show how our system helps to detect and track example events such as civil unrests and crimes.",
"title": ""
},
{
"docid": "10a517a46708efe948701da2405e02fe",
"text": "User-contributed messages on social media sites such as Twitter have emerged as powerful, real-time means of information sharing on the Web. These short messages tend to reflect a variety of events in real time, making Twitter particularly well suited as a source of real-time event content. In this paper, we explore approaches for analyzing the stream of Twitter messages to distinguish between messages about real-world events and non-event messages. Our approach relies on a rich family of aggregate statistics of topically similar message clusters. Large-scale experiments over millions of Twitter messages show the effectiveness of our approach for surfacing real-world event content on Twitter.",
"title": ""
}
] |
[
{
"docid": "6858c559b78c6f2b5000c22e2fef892b",
"text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.",
"title": ""
},
{
"docid": "a8ca07bf7784d7ac1d09f84ac76be339",
"text": "AbstructEstimation of 3-D information from 2-D image coordinates is a fundamental problem both in machine vision and computer vision. Circular features are the most common quadratic-curved features that have been addressed for 3-D location estimation. In this paper, a closed-form analytical solution to the problem of 3-D location estimation of circular features is presented. Two different cases are considered: 1) 3-D orientation and 3-D position estimation of a circular feature when its radius is known, and 2) 3-D orientation and 3-D position estimation of a circular feature when its radius is not known. As well, extension of the developed method to 3-D quadratic features is addressed. Specifically, a closed-form analytical solution is derived for 3-D position estimation of spherical features. For experimentation purposes, simulated as well as real setups were employed. Simulated experimental results obtained for all three cases mentioned above verified the analytical method developed in this paper. In the case of real experiments, a set of circles located on a calibration plate, whose locations were known with respect to a reference frame, were used for camera calibration as well as for the application of the developed method. Since various distortion factors had to be compensated in order to obtain accurate estimates of the parameters of the imaged circle-an ellipse-with respect to the camera's image frame, a sequential compensation procedure was applied to the input grey-level image. The experimental results obtained once more showed the validity of the total process involved in the 3-D location estimation of circular features in general and the applicability of the analytical method developed in this paper in particular.",
"title": ""
},
{
"docid": "6b70f1ab7f836d5a2681c3f998393ed3",
"text": "FOREST FIRES CAUSE MANY ENVIronmental disasters, creating economical and ecological damage as well as endangering people’s lives. Heightened interest in automatic surveillance and early forest-fire detection has taken precedence over traditional human surveillance because the latter’s subjectivity affects detection reliability, which is the main issue for forest-fire detection systems. In current systems, the process is tedious, and human operators must manually validate many false alarms. Our approach—the False Alarm Reduction system—proposes an alternative realtime infrared–visual system that overcomes this problem. The FAR system consists of applying new infrared-image processing techniques and Artificial Neural Networks (ANNs), using additional information from meteorological sensors and from a geographical information database, taking advantage of the information redundancy from visual and infrared cameras through a matching process, and designing a fuzzy expert rule base to develop a decision function. Furthermore, the system provides the human operator with new software tools to verify alarms.",
"title": ""
},
{
"docid": "38b4622589c606b862fdebbcd14a228e",
"text": "Crowdsourcing is an effective tool for scalable data annotation in both research and enterprise contexts. Due to crowdsourcing’s open participation model, quality assurance is critical to the success of any project. Present methods rely on EM-style post-processing or manual annotation of large gold standard sets. In this paper we present an automated quality assurance process that is inexpensive and scalable. Our novel process relies on programmatic gold creation to provide targeted training feedback to workers and to prevent common scamming scenarios. We find that it decreases the amount of manual work required to manage crowdsourced labor while improving the overall quality of the results.",
"title": ""
},
{
"docid": "617c4da4ce82b2cb5f4d0e6fb61f87b9",
"text": "PURPOSE\nRecent studies have suggested that microRNA biomarkers could be useful for stratifying lung cancer subtypes, but microRNA signatures varied between different populations. Squamous cell carcinoma (SCC) is one major subtype of lung cancer that urgently needs biomarkers to aid patient management. Here, we undertook the first comprehensive investigation on microRNA in Chinese SCC patients.\n\n\nEXPERIMENTAL DESIGN\nMicroRNA expression was measured in cancerous and noncancerous tissue pairs strictly collected from Chinese SCC patients (stages I-III), who had not been treated with chemotherapy or radiotherapy prior to surgery. The molecular targets of proposed microRNA were further examined.\n\n\nRESULTS\nWe identified a 5-microRNA classifier (hsa-miR-210, hsa-miR-182, hsa-miR-486-5p, hsa-miR-30a, and hsa-miR-140-3p) that could distinguish SCC from normal lung tissues. The classifier had an accuracy of 94.1% in a training cohort (34 patients) and 96.2% in a test cohort (26 patients). We also showed that high expression of hsa-miR-31 was associated with poor survival in these 60 SCC patients by Kaplan-Meier analysis (P = 0.007), by univariate Cox analysis (P = 0.011), and by multivariate Cox analysis (P = 0.011). This association was independently validated in a separate cohort of 88 SCC patients (P = 0.008, 0.011, and 0.003 in Kaplan-Meier analysis, univariate Cox analysis, and multivariate Cox analysis, respectively). We then determined that the tumor suppressor DICER1 is a target of hsa-miR-31. Expression of hsa-miR-31 in a human lung cancer cell line repressed DICER1 activity but not PPP2R2A or LATS2.\n\n\nCONCLUSIONS\nOur results identified a new diagnostic microRNA classifier for SCC among Chinese patients and a new prognostic biomarker, hsa-miR-31.",
"title": ""
},
{
"docid": "f2d27b79f1ac3809f7ea605203136760",
"text": "The Internet of Things (IoT) is a fast-growing movement turning devices into always-connected smart devices through the use of communication technologies. This facilitates the creation of smart strategies allowing monitoring and optimization as well as many other new use cases for various sectors. Low Power Wide Area Networks (LPWANs) have enormous potential as they are suited for various IoT applications and each LPWAN technology has certain features, capabilities and limitations. One of these technologies, namely LoRa/LoRaWAN has several promising features and private and public LoRaWANs are increasing worldwide. Similarly, researchers are also starting to study the potential of LoRa and LoRaWANs. This paper examines the work that has already been done and identifies flaws and strengths by performing a comparison of created testbeds. Limitations of LoRaWANs are also identified.",
"title": ""
},
{
"docid": "d56855e068a4524fda44d93ac9763cab",
"text": "greatest cause of mortality from cardiovascular disease, after myocardial infarction and cerebrovascular stroke. From hospital epidemiological data it has been calculated that the incidence of PE in the USA is 1 per 1,000 annually. The real number is likely to be larger, since the condition goes unrecognised in many patients. Mortality due to PE has been estimated to exceed 15% in the first three months after diagnosis. PE is a dramatic and life-threatening complication of deep venous thrombosis (DVT). For this reason, the prevention, diagnosis and treatment of DVT is of special importance, since symptomatic PE occurs in 30% of those affected. If asymptomatic episodes are also included, it is estimated that 50-60% of DVT patients develop PE. DVT and PE are manifestations of the same entity, namely thromboembolic disease. If we extrapolate the epidemiological data from the USA to Greece, which has a population of about ten million, 20,000 new cases of thromboembolic disease may be expected annually. Of these patients, PE will occur in 10,000, of which 6,000 will have symptoms and 900 will die during the first trimester.",
"title": ""
},
{
"docid": "e70c4ad755edef1fbea472e029bd7e22",
"text": "This narrative review examines assessments of the reliability of online health information retrieved through social media to ascertain whether health information accessed or disseminated through social media should be evaluated differently than other online health information. Several medical, library and information science, and interdisciplinary databases were searched using terms relating to social media, reliability, and health information. While social media's increasing role in health information consumption is recognized, studies are dominated by investigations of traditional (i.e., non-social media) sites. To more richly assess constructions of reliability when using social media for health information, future research must focus on health consumers' unique contexts, virtual relationships, and degrees of trust within their social networks.",
"title": ""
},
{
"docid": "c61470e2c1310a9c6fa09dc96659d4ab",
"text": "Selenium IDE Locating Elements There is a great responsibility for developers and testers to ensure that web software exhibits high reliability and speed. Somewhat recently, the software community has seen a rise in the usage of AJAX in web software development to achieve this goal. The advantage of AJAX applications is that they are typically very responsive. The vEOC is an Emergency Management Training application which requires this level of interactivity. Selenium is great in that it is an open source testing tool that can handle the amount of JavaScript present in AJAX applications, and even gives the tester the freedom to add their own features. Since web software is so frequently modified, the main goal for any test developer is to create sustainable tests. How can Selenium tests be made more maintainable?",
"title": ""
},
{
"docid": "bf499e8252cac48cdd406699c8413e16",
"text": "Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. We introduce a method which integrates and reasons relying on information spread within documents and across multiple documents. We frame it as an inference problem on a graph. Mentions of entities are nodes of this graph where edges encode relations between different mentions (e.g., withinand cross-document co-references). Graph convolutional networks (GCNs) are applied to these graphs and trained to perform multi-step reasoning. Our Entity-GCN method is scalable and compact, and it achieves state-of-the-art results on the WIKIHOP dataset (Welbl et al., 2017).",
"title": ""
},
{
"docid": "18969bed489bb9fa7196634a8086449e",
"text": "A speech recognition model is proposed in which the transformation from an input speech signal into a sequence of phonemes is carried out largely through an active or feedback process. In this process, patterns are generated internally in the analyzer according to an adaptable sequence of instructions until a best match with the input signal is obtained. Details of the process are given, and the areas where further research is needed are indicated.",
"title": ""
},
{
"docid": "247eb1c32cf3fd2e7a925d54cb5735da",
"text": "Several applications in machine learning and machine-to-human interactions tolerate small deviations in their computations. Digital systems can exploit this fault-tolerance to increase their energy-efficiency, which is crucial in embedded applications. Hence, this paper introduces a new means of Approximate Computing: Dynamic-Voltage-Accuracy-Frequency-Scaling (DVAFS), a circuit-level technique enabling a dynamic trade-off of energy versus computational accuracy that outperforms other Approximate Computing techniques. The usage and applicability of DVAFS is illustrated in the context of Deep Neural Networks, the current state-of-the-art in advanced recognition. These networks are typically executed on CPU's or GPU's due to their high computational complexity, making their deployment on battery-constrained platforms only possible through wireless connections with the cloud. This work shows how deep learning can be brought to IoT devices by running every layer of the network at its optimal computational accuracy. Finally, we demonstrate a DVAFS processor for Convolutional Neural Networks, achieving efficiencies of multiple TOPS/W.",
"title": ""
},
{
"docid": "c61e5bae4dbccf0381269980a22f726a",
"text": "—Web mining is the application of the data mining which is useful to extract the knowledge. Web mining has been explored to different techniques have been proposed for the variety of the application. Most research on Web mining has been from a 'data-centric' or information based point of view. Web usage mining, Web structure mining and Web content mining are the types of Web mining. Web usage mining is used to mining the data from the web server log files. Web Personalization is one of the areas of the Web usage mining that can be defined as delivery of content tailored to a particular user or as personalization requires implicitly or explicitly collecting visitor information and leveraging that knowledge in your content delivery framework to manipulate what information you present to your users and how you present it. In this paper, we have focused on various Web personalization categories and their research issues.",
"title": ""
},
{
"docid": "1b2e9a6abaa77d0249a8668a71a41002",
"text": "Verilog is a concurrent language Aimed at modeling hardware — optimized for it! Typical of hardware description languages (HDLs), i t: provides for the specification of concurrent activi ties stands on its head to make the activities look like they happened at the same time Why? allows for intricate timing specifications A concurrent language allows for: Multiple concurrent “elements” An event in one element to cause activity in anothe r. (An event is an output or state change at a given time) based on interconnection of the element’s ports Further execution to be delayed until a specific event occurs",
"title": ""
},
{
"docid": "1a77d0a15918f5dda83342c593f857d2",
"text": "Context: With business process modelling, companies and organizations can gain explicit control over their processes. Currently, there are many notations in the area of business process modelling, where Business Process Model and Notation (BPMN) is denoted as the de facto standard. Aims: The aim of this research is to provide the state-of-the-art results addressing the acceptance of BPMN, while also examining the purposes of its usage. Furthermore, the advantages, disadvantages and other interests related to BPMN were also investigated. Method: To achieve these objectives, a Systematic Literature Review (SLR) and a semantic examination of articles’ citations was conducted. Results: After completing SLR, out of a total of 852 articles, 31 were deemed relevant. The majority of the articles analyzed the notation and compared it with other modelling techniques. The remainder evaluated general aspects of the notation, e.g. history and versions of the standard, usage of the notation or tools. Conclusion: Our findings demonstrate that there are empirical insights about the level of BPMN acceptance. They suggest that BPMN is still widely perceived as the de facto standard in the process modelling domain and its usage is everincreasing. However, many studies report that only a limited set of elements are commonly used and to this end, several extensions were proposed. The main purpose of BPMN remains the description of business processes.",
"title": ""
},
{
"docid": "9a1201d68018fce5ce413511dc64e8b7",
"text": "In the health sciences it is quite common to carry out studies designed to determine the influence of one or more variables upon a given response variable. When this response variable is numerical, simple or multiple regression techniques are used, depending on the case. If the response variable is a qualitative variable (dichotomic or polychotomic), as for example the presence or absence of a disease, linear regression methodology is not applicable, and simple or multinomial logistic regression is used, as applicable.",
"title": ""
},
{
"docid": "3deb967a4e683b4a38b9143b105a5f2a",
"text": "BACKGROUND\nThe Brief Obsessive Compulsive Scale (BOCS), derived from the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) and the children's version (CY-BOCS), is a short self-report tool used to aid in the assessment of obsessive-compulsive symptoms and diagnosis of obsessive-compulsive disorder (OCD). It is widely used throughout child, adolescent and adult psychiatry settings in Sweden but has not been validated up to date.\n\n\nAIM\nThe aim of the current study was to examine the psychometric properties of the BOCS amongst a psychiatric outpatient population.\n\n\nMETHOD\nThe BOCS consists of a 15-item Symptom Checklist including three items (hoarding, dysmorphophobia and self-harm) related to the DSM-5 category \"Obsessive-compulsive related disorders\", accompanied by a single six-item Severity Scale for obsessions and compulsions combined. It encompasses the revisions made in the Y-BOCS-II severity scale by including obsessive-compulsive free intervals, extent of avoidance and excluding the resistance item. 402 adult psychiatric outpatients with OCD, attention-deficit/hyperactivity disorder, autism spectrum disorder and other psychiatric disorders completed the BOCS.\n\n\nRESULTS\nPrincipal component factor analysis produced five subscales titled \"Symmetry\", \"Forbidden thoughts\", \"Contamination\", \"Magical thoughts\" and \"Dysmorphic thoughts\". The OCD group scored higher than the other diagnostic groups in all subscales (P < 0.001). Sensitivities, specificities and internal consistency for both the Symptom Checklist and the Severity Scale emerged high (Symptom Checklist: sensitivity = 85%, specificities = 62-70% Cronbach's α = 0.81; Severity Scale: sensitivity = 72%, specificities = 75-84%, Cronbach's α = 0.94).\n\n\nCONCLUSIONS\nThe BOCS has the ability to discriminate OCD from other non-OCD related psychiatric disorders. The current study provides strong support for the utility of the BOCS in the assessment of obsessive-compulsive symptoms in clinical psychiatry.",
"title": ""
},
{
"docid": "ed3ed757804a423eef8b7394b64a971a",
"text": "This work is part of an eort aimed at developing computer-based systems for language instruction; we address the task of grading the pronunciation quality of the speech of a student of a foreign language. The automatic grading system uses SRI's Decipher continuous speech recognition system to generate phonetic segmentations. Based on these segmentations and probabilistic models we produce dierent pronunciation scores for individual or groups of sentences that can be used as predictors of the pronunciation quality. Dierent types of these machine scores can be combined to obtain a better prediction of the overall pronunciation quality. In this paper we review some of the bestperforming machine scores and discuss the application of several methods based on linear and nonlinear mapping and combination of individual machine scores to predict the pronunciation quality grade that a human expert would have given. We evaluate these methods in a database that consists of pronunciation-quality-graded speech from American students speaking French. With predictors based on spectral match and on durational characteristics, we ®nd that the combination of scores improved the prediction of the human grades and that nonlinear mapping and combination methods performed better than linear ones. Characteristics of the dierent nonlinear methods studied are discussed. Ó 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "cac0de9be06166653af16275a9b54878",
"text": "Community-based question answering(CQA) services have arisen as a popular knowledge sharing pattern for netizens. With abundant interactions among users, individuals are capable of obtaining satisfactory information. However, it is not effective for users to attain answers within minutes. Users have to check the progress over time until the satisfying answers submitted. We address this problem as a user personalized satisfaction prediction task. Existing methods usually exploit manual feature selection. It is not desirable as it requires careful design and is labor intensive. In this paper, we settle this issue by developing a new multiple instance deep learning framework. Specifically, in our settings, each question follows a weakly supervised learning (multiple instance learning) assumption, where its obtained answers can be regarded as instance sets and we define the question resolved with at least one satisfactory answer. We thus design an efficient framework exploiting multiple instance learning property with deep learning tactic to model the question-answer pairs relevance and rank the asker’s satisfaction possibility. Extensive experiments on large-scale datasets from Stack Exchange demonstrate the feasibility of our proposed framework in predicting askers personalized satisfaction. Our framework can be extended to numerous applications such as UI satisfaction Prediction, multi-armed bandit problem, expert finding and so on.",
"title": ""
},
{
"docid": "a9e856d2c3bb69df289abf26ce6f178c",
"text": "A novel hybrid method coupling genetic programming and orthogonal least squares, called GP/OLS, was employed to derive new ground-motion prediction equations (GMPEs). The principal ground-motion parameters formulated were peak ground acceleration (PGA), peak ground velocity (PGV) and peak ground displacement (PGD). The proposed GMPEs relate PGA, PGV and PGD to different seismic parameters including earthquake magnitude, earthquake source to site distance, average shear-wave velocity, and faulting mechanisms. The equations were established based on an extensive database of strong ground-motion recordings released by Pacific Earthquake Engineering Research Center (PEER). For more validity verification, the developed equations were employed to predict the ground-motion parameters of the Iranian plateau earthquakes. A sensitivity analysis was carried out to determine the contributions of the parameters affecting PGA, PGV and PGD. The sensitivity of the models to the variations of the influencing parameters was further evaluated through a parametric analysis. The obtained GMPEs are effectively capable of estimating the site ground-motion parameters. The equations provide a prediction performance better than or comparable with the attenuation relationships found in the literature. The derived GMPEs are remarkably simple and straightforward and can reliably be used for the pre-design purposes. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
9848260aa11d1be357f5405bafa249e5
|
A Hidden Markov Model based driver intention prediction system
|
[
{
"docid": "aebfb6cb70de64636647141e6a49d37c",
"text": "Classifying other agents’ intentions is a very complex task but it can be very essential in assisting (autonomous or human) agents in navigating safely in dynamic and possibly hostile environments. This paper introduces a classification approach based on support vector machines and Bayesian filtering (SVM-BF). It then applies it to a road intersection problem to assist a vehicle in detecting the intention of an approaching suspicious vehicle. The SVM-BF approach achieved very promising results.",
"title": ""
}
] |
[
{
"docid": "d53db1dc155c983399a812bbfffa1fb1",
"text": "We present a framework combining hierarchical and multi-agent deep reinforcement learning approaches to solve coordination problems among a multitude of agents using a semi-decentralized model. The framework extends the multi-agent learning setup by introducing a meta-controller that guides the communication between agent pairs, enabling agents to focus on communicating with only one other agent at any step. This hierarchical decomposition of the task allows for efficient exploration to learn policies that identify globally optimal solutions even as the number of collaborating agents increases. We show promising initial experimental results on a simulated distributed scheduling problem.",
"title": ""
},
{
"docid": "6a624b97d996372de9f385798e02d2df",
"text": "Due to the continuous increase of the world population living in cities, it is crucial to identify strategic plans and perform associated actions to make cities smarter, i.e., more operationally efficient, socially friendly, and environmentally sustainable, in a cost effective manner. To achieve these goals, emerging smart cities need to be optimally and intelligently measured, monitored, and managed. In this context the paper proposes the development of a framework for classifying performance indicators of a smart city. It is based on two dimensions: the degree of objectivity of observed variables and the level of technological advancement for data collection. The paper shows an application of the presented framework to the case of the Bari municipality (Italy).",
"title": ""
},
{
"docid": "638f3bcc6df54c3c7efe13897309b9d3",
"text": "During document scanning, skew is inevitably introduced into the incoming document image. Since the algorithms for layout analysis and character recognition are generally very sensitive to the page skew, skew detection and correction in document images are the critical steps before layout analysis. In this paper, a novel skew detection method based on straight-line fitting is proposed. And a concept of eigen-point is introduced. After the relations between the neighboring eigen-points in every text line within a suitable sub-region were analyzed, the eigen-points most possibly laid on the baselines are selected as samples for the straight-line fitting. The average of these baseline directions is computed, which corresponds to the degree of skew of the whole document image. Then a fast skew correction method based on the scanning line model is also presented. Experiments prove that the proposed approaches are fast and accurate.",
"title": ""
},
{
"docid": "13e2b22875e1a23e9e8ea2f80671c74e",
"text": "This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.",
"title": ""
},
{
"docid": "6340d8d3ad4ee479bc783e376b202c89",
"text": "Model distillation is an effective and widely used technique to transfer knowledge from a teacher to a student network. The typical application is to transfer from a powerful large network or ensemble to a small network, in order to meet the low-memory or fast execution requirements. In this paper, we present a deep mutual learning (DML) strategy. Different from the one-way transfer between a static pre-defined teacher and a student in model distillation, with DML, an ensemble of students learn collaboratively and teach each other throughout the training process. Our experiments show that a variety of network architectures benefit from mutual learning and achieve compelling results on both category and instance recognition tasks. Surprisingly, it is revealed that no prior powerful teacher network is necessary - mutual learning of a collection of simple student networks works, and moreover outperforms distillation from a more powerful yet static teacher.",
"title": ""
},
{
"docid": "2531d8d05d262c544a25dbffb7b43d67",
"text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.",
"title": ""
},
{
"docid": "3a497f0634a56ba975948d8bd18e8af8",
"text": "In this paper we evaluate the WER improvement from modeling pronunciation probabilities and word-specific silence probabilities in speech recognition. We do this in the context of Finite State Transducer (FST)-based decoding, where pronunciation and silence probabilities are encoded in the lexicon (L) transducer. We describe a novel way to model word-dependent silence probabilities, where in addition to modeling the probability of silence following each individual word, we also model the probability of each word appearing after silence. All of these probabilities are estimated from aligned training data, with suitable smoothing. We conduct our experiments on four commonly used automatic speech recognition datasets, namely Wall Street Journal, Switchboard, TED-LIUM, and Librispeech. The improvement from modeling pronunciation and silence probabilities is small but fairly consistent across datasets.",
"title": ""
},
{
"docid": "d6a9ce50b87c8dbacaac24e736d41175",
"text": "The main purpose of this paper was to present single channel EEG records and heart rate (HR) changes during shooting routine of 8 experienced archers. Possible differences between recurve and compound shooters in named values were investigated in accordance to arrow score. Additional contribution of this study was systematical review of psychophysiological studies done in archery. Descriptive statistics revealed that compound shooters achieved higher arrow score values, had higher heart rate values pre, during and post shooting, had higher attention values pre, during and post shooting and very similar meditation values pre, during and post shooting according to recurve shooters. ANOVA showed significant differences (p<0,01) between compound shooters and recurve shooters in variables of arrow score, all heart rate and attention level variables, except ones concerning meditation levels. Overall, the obtained results were interesting and can serve as a starting ground for future experiments in order to reach valid and concrete biofeedback data that will support archery excellence.",
"title": ""
},
{
"docid": "4a69a0c5c225d9fbb40373aebaeb99be",
"text": "The hyperlink structure of Wikipedia constitutes a key resource for many Natural Language Processing tasks and applications, as it provides several million semantic annotations of entities in context. Yet only a small fraction of mentions across the entire Wikipedia corpus is linked. In this paper we present the automatic construction and evaluation of a Semantically Enriched Wikipedia (SEW) in which the overall number of linked mentions has been more than tripled solely by exploiting the structure of Wikipedia itself and the wide-coverage sense inventory of BabelNet. As a result we obtain a sense-annotated corpus with more than 200 million annotations of over 4 million different concepts and named entities. We then show that our corpus leads to competitive results on multiple tasks, such as Entity Linking and Word Similarity.",
"title": ""
},
{
"docid": "e83ae69dea6d34e169fc34c64d33ee93",
"text": "Topic models have the potential to improve search and browsing by extracting useful semantic themes from web pages and other text documents. When learned topics are coherent and interpretable, they can be valuable for faceted browsing, results set diversity analysis, and document retrieval. However, when dealing with small collections or noisy text (e.g. web search result snippets or blog posts), learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models. Our regularizers work by creating a structured prior over words that reflect broad patterns in the external data. Using thirteen datasets we show that both regularizers improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more useful across a broader range of text data.",
"title": ""
},
{
"docid": "f97244b3ca9641b43dc4f4592e30f48b",
"text": "In many real applications of machine learning and data mining, we are often confronted with high-dimensional data. How to cluster high-dimensional data is still a challenging problem due to the curse of dimensionality. In this paper, we try to address this problem using joint dimensionality reduction and clustering. Different from traditional approaches that conduct dimensionality reduction and clustering in sequence, we propose a novel framework referred to as discriminative embedded clustering which alternates them iteratively. Within this framework, we are able not only to view several traditional approaches and reveal their intrinsic relationships, but also to be stimulated to develop a new method. We also propose an effective approach for solving the formulated nonconvex optimization problem. Comprehensive analyses, including convergence behavior, parameter determination, and computational complexity, together with the relationship to other related approaches, are also presented. Plenty of experimental results on benchmark data sets illustrate that the proposed method outperforms related state-of-the-art clustering approaches and existing joint dimensionality reduction and clustering methods.",
"title": ""
},
{
"docid": "eb32ce661a0d074ce90861793a2e4de7",
"text": "A new transfer function from control voltage to duty cycle, the closed-current loop, which captures the natural sampling effect is used to design a controller for the voltage-loop of a pulsewidth modulated (PWM) dc-dc converter operating in continuous-conduction mode (CCM) with peak current-mode control (PCM). This paper derives the voltage loop gain and the closed-loop transfer function from reference voltage to output voltage. The closed-loop transfer function from the input voltage to the output voltage, or the closed-loop audio-susceptibility is derived. The closed-loop transfer function from output current to output voltage, or the closed loop output impedance is also derived. The derivation is performed using an averaged small-signal model of the example boost converter for CCM. Experimental verification is presented. The theoretical and experimental results were in good agreement, confirming the validity of the transfer functions derived.",
"title": ""
},
{
"docid": "53bbb6d5467574af4533607c95505ee4",
"text": "The synthesis of genetics-based machine learning and fuzzy logic is beginning to show promise as a potent tool in solving complex control problems in multi-variate non-linear systems. In this paper an overview of current research applying the genetic algorithm to fuzzy rule based control is presented. A novel approach to genetics-based machine learning of fuzzy controllers, called a Pittsburgh Fuzzy Classifier System # 1 (P-FCS1) is proposed. P-FCS1 is based on the Pittsburgh model of learning classifier systems and employs variable length rule-sets and simultaneously evolves fuzzy set membership functions and relations. A new crossover operator which respects the functional linkage between fuzzy rules with overlapping input fuzzy set membership functions is introduced. Experimental results using P-FCS l are reported and compared with other published results. Application of P-FCS1 to a distributed control problem (dynamic routing in computer networks) is also described and experimental results are presented.",
"title": ""
},
{
"docid": "331df0bd161470558dd5f5061d2b1743",
"text": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.",
"title": ""
},
{
"docid": "6ee8bacdf12e951d980963010570b58d",
"text": "Several datasets have been annotated and published for classification of emotions. They differ in several ways: (1) the use of different annotation schemata (e. g., discrete label sets, including joy, anger, fear, or sadness or continuous values including valence, or arousal), (2) the domain, and, (3) the file formats. This leads to several research gaps: supervised models often only use a limited set of available resources. Additionally, no previous work has compared emotion corpora in a systematic manner. We aim at contributing to this situation with a survey of the datasets, and aggregate them in a common file format with a common annotation schema. Based on this aggregation, we perform the first cross-corpus classification experiments in the spirit of future research enabled by this paper, in order to gain insight and a better understanding of differences of models inferred from the data. This work also simplifies the choice of the most appropriate resources for developing a model for a novel domain. One result from our analysis is that a subset of corpora is better classified with models trained on a different corpus. For none of the corpora, training on all data altogether is better than using a subselection of the resources. Our unified corpus is available at http://www.ims.uni-stuttgart.de/data/unifyemotion. Title and Abstract in German Eine Analyse von annotierten Korpora zur Emotionsklassifizierung in Text Es existieren bereits verschiedene Textkorpora, welche zur Erstellung von Modellen für die automatische Emotionsklassifikation erstellt wurden. Sie unterscheiden sich (1) in den unterschiedlichen Annotationsschemata (z.B. diskrete Klassen wie Freude, Wut, Angst, Trauer oder kontinuierliche Werte wie Valenz und Aktivierung), (2) in der Domäne, und, auf einer technischen Ebene, (3) in den Dateiformaten. Dies führt dazu, dass überwacht erstellte Modelle typischerweise nur einen Teil der verfügbaren Ressourcen nutzen sowie kein systematischer Vergleich der Korpora existiert. Hier setzt unsere Arbeit mit einem Überblick der verfügbaren Datensätze an, welche wir in ein gemeinsames Format mit einem einheitlichen Annotationsschema konvertieren. Darauf aufbauend führen wir erste Experimente durch, in dem wir auf Teilmengen der Korpora trainieren und auf anderen testen. Dies steht im Sinne zukünftiger, durch unsere Arbeit ermöglichten Analysen, die Unterschiede zwischen den Annotationen besser zu verstehen. Des Weiteren vereinfacht dies die Wahl einer angemessenen Ressource für die Erstellung von Modellen für eine neue Domäne. Wir zeigen unter anderem, dass die Vorhersagen für einige Korpora besser funktioniert, wenn ein Modell auf einer anderen Ressource trainiert wird. Weiterhin ist für kein Korpus die Vorhersage am besten, wenn alle Daten vereint werden. Unser aggregiertes Korpus ist verfügbar unter http://www.ims.uni-stuttgart.de/data/unifyemotion. This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http:// creativecommons.org/licenses/by/4.0/",
"title": ""
},
{
"docid": "3665a82c20eb55c8afd2c7f35b68f49f",
"text": "The formulation and delivery of biopharmaceutical drugs, such as monoclonal antibodies and recombinant proteins, poses substantial challenges owing to their large size and susceptibility to degradation. In this Review we highlight recent advances in formulation and delivery strategies — such as the use of microsphere-based controlled-release technologies, protein modification methods that make use of polyethylene glycol and other polymers, and genetic manipulation of biopharmaceutical drugs — and discuss their advantages and limitations. We also highlight current and emerging delivery routes that provide an alternative to injection, including transdermal, oral and pulmonary delivery routes. In addition, the potential of targeted and intracellular protein delivery is discussed.",
"title": ""
},
{
"docid": "c819f88b02d0c1fad58a3821ab682cb4",
"text": "The seminal work of Gatys et al. demonstrated the power of Convolutional Neural Networks (CNNs) in creating artistic imagery by separating and recombining image content and style. This process of using CNNs to render a content image in different styles is referred to as Neural Style Transfer (NST). Since then, NST has become a trending topic both in academic literature and industrial applications. It is receiving increasing attention and a variety of approaches are proposed to either improve or extend the original NST algorithm. In this paper, we aim to provide a comprehensive overview of the current progress towards NST. We first propose a taxonomy of current algorithms in the field of NST. Then, we present several evaluation methods and compare different NST algorithms both qualitatively and quantitatively. The review concludes with a discussion of various applications of NST and open problems for future research. A list of papers discussed in this review, corresponding codes, pre-trained models and more comparison results are publicly available at: https://github.com/ycjing/Neural-Style-Transfer-Papers.",
"title": ""
},
{
"docid": "61b7275a150b34cf9a0585bdedd22106",
"text": "The objective of knowledge graph embedding is to encode both entities and relations of knowledge graphs into continuous low-dimensional vector spaces. Previously, most works focused on symbolic representation of knowledge graph with structure information, which can not handle new entities or entities with few facts well. In this paper, we propose a novel deep architecture to utilize both structural and textual information of entities. Specifically, we introduce three neural models to encode the valuable information from text description of entity, among which an attentive model can select related information as needed. Then, a gating mechanism is applied to integrate representations of structure and text into a unified architecture. Experiments show that our models outperform baseline by margin on link prediction and triplet classification tasks. Source codes of this paper will be available on Github.",
"title": ""
},
{
"docid": "45cff09810b8741d8be1010aa6ff3000",
"text": "This paper discusses experience in applying time harmonic three-dimensional (3D) finite element (FE) analysis in analyzing an axial-flux (AF) solid-rotor induction motor (IM). The motor is a single rotor - single stator AF IM. The construction presented in this paper has not been analyzed before in any technical documents. The field analysis and the comparison of torque calculation results of the 3D calculations with measured torque results are presented",
"title": ""
},
{
"docid": "cbaff0ba24a648e8228a7663e3d32e97",
"text": "Microservice architecture has started a new trend for application development/deployment in cloud due to its flexibility, scalability, manageability and performance. Various microservice platforms have emerged to facilitate the whole software engineering cycle for cloud applications from design, development, test, deployment to maintenance. In this paper, we propose a performance analytical model and validate it by experiments to study the provisioning performance of microservice platforms. We design and develop a microservice platform on Amazon EC2 cloud using Docker technology family to identify important elements contributing to the performance of microservice platforms. We leverage the results and insights from experiments to build a tractable analytical performance model that can be used to perform what-if analysis and capacity planning in a systematic manner for large scale microservices with minimum amount of time and cost.",
"title": ""
}
] |
scidocsrr
|
738ad6fc2445fb8aae0654c61af4fced
|
Crawling Facebook for social network analysis purposes
|
[
{
"docid": "7c1691fd1140b3975b61f8e2ce3dcd9b",
"text": "In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars.We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.",
"title": ""
},
{
"docid": "29e5d267bebdeb2aa22b137219b4407e",
"text": "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.",
"title": ""
}
] |
[
{
"docid": "0412316a818c74190d5b6752e349d84f",
"text": "A new unsupervised feature selection method, i.e., Robust Unsupervised Feature Selection (RUFS), is proposed. Unlike traditional unsupervised feature selection methods, pseudo cluster labels are learned via local learning regularized robust nonnegative matrix factorization. During the label learning process, feature selection is performed simultaneously by robust joint l2,1 norms minimization. Since RUFS utilizes l2,1 norm minimization on processes of both label learning and feature learning, outliers and noise could be effectively handled and redundant or noisy features could be effectively reduced. Our method adopts the advantages of robust nonnegative matrix factorization, local learning, and robust feature learning. In order to make RUFS be scalable, we design a (projected) limited-memory BFGS based iterative algorithm to efficiently solve the optimization problem of RUFS in terms of both memory consumption and computation complexity. Experimental results on different benchmark real world datasets show the promising performance of RUFS over the state-of-the-arts.",
"title": ""
},
{
"docid": "ec21bcaf4fd5cd3d21ddec684fd7dbd1",
"text": "Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds r F and r L for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning r F and r L showed that there is no optimal choice, but r = r F = r L is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.",
"title": ""
},
{
"docid": "8a290f2a7549bbe7d852403924ee8519",
"text": "In this paper we describe a heavily constrained university timetabling problem, and our genetic algorithm based approach to solve it. A problem-specific chromosome representation and knowledge-augmented genetic operators have been developed; these operators ‘ intelli gently’ avoid building ill egal timetables. The prototype timetabling system which is presented has been implemented in C and PROLOG, and includes an interactive graphical user interface. Tests with real data from our university were performed and yield promising results.",
"title": ""
},
{
"docid": "c0f11031f78044075e6e798f8f10e43f",
"text": "We investigate the problem of personalized reviewbased rating prediction which aims at predicting users’ ratings for items that they have not evaluated by using their historical reviews and ratings. Most of existing methods solve this problem by integrating topic model and latent factor model to learn interpretable user and items factors. However, these methods cannot utilize word local context information of reviews. Moreover, it simply restricts user and item representations equivalent to their review representations, which may bring some irrelevant information in review text and harm the accuracy of rating prediction. In this paper, we propose a novel Collaborative Multi-Level Embedding (CMLE) model to address these limitations. The main technical contribution of CMLE is to integrate word embedding model with standard matrix factorization model through a projection level. This allows CMLE to inherit the ability of capturing word local context information from word embedding model and relax the strict equivalence requirement by projecting review embedding to user and item embeddings. A joint optimization problem is formulated and solved through an efficient stochastic gradient ascent algorithm. Empirical evaluations on real datasets show CMLE outperforms several competitive methods and can solve the two limitations well.",
"title": ""
},
{
"docid": "de3789fe0dccb53fe8555e039fde1bc6",
"text": "Estimating consumer surplus is challenging because it requires identification of the entire demand curve. We rely on Uber’s “surge” pricing algorithm and the richness of its individual level data to first estimate demand elasticities at several points along the demand curve. We then use these elasticity estimates to estimate consumer surplus. Using almost 50 million individuallevel observations and a regression discontinuity design, we estimate that in 2015 the UberX service generated about $2.9 billion in consumer surplus in the four U.S. cities included in our analysis. For each dollar spent by consumers, about $1.60 of consumer surplus is generated. Back-of-the-envelope calculations suggest that the overall consumer surplus generated by the UberX service in the United States in 2015 was $6.8 billion.",
"title": ""
},
{
"docid": "258c90fe18f120a24d8132550ed85a6e",
"text": "Based on the thorough analysis of the literature, Chap. 1 introduces readers with challenges of STEM-driven education in general and those challenges caused by the use of this paradigm in computer science (CS) education in particular. This analysis enables to motivate our approach we discuss throughout the book. Chapter 1 also formulates objectives, research agenda and topics this book addresses. The objectives of the book are to discuss the concepts and approaches enabling to transform the current CS education paradigm into the STEM-driven one at the school and, to some extent, at the university. We seek to implement this transformation through the integration of the STEM pedagogy, the smart content and smart devices and educational robots into the smart STEM-driven environment, using reuse-based approaches taken from software engineering and CS.",
"title": ""
},
{
"docid": "b401c0a7209d98aea517cf0e28101689",
"text": "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"title": ""
},
{
"docid": "15d7279c9bb80181d0075425b5f4516d",
"text": "Although the radio access network (RAN) part of mobile networks offers a significant opportunity for benefiting from the use of SDN ideas, this opportunity is largely untapped due to the lack of a software-defined RAN (SD-RAN) platform. We fill this void with FlexRAN, a flexible and programmable SD-RAN platform that separates the RAN control and data planes through a new, custom-tailored southbound API. Aided by virtualized control functions and control delegation features, FlexRAN provides a flexible control plane designed with support for real-time RAN control applications, flexibility to realize various degrees of coordination among RAN infrastructure entities, and programmability to adapt control over time and easier evolution to the future following SDN/NFV principles. We implement FlexRAN as an extension to a modified version of the OpenAirInterface LTE platform, with evaluation results indicating the feasibility of using FlexRAN under the stringent time constraints posed by the RAN. To demonstrate the effectiveness of FlexRAN as an SD-RAN platform and highlight its applicability for a diverse set of use cases, we present three network services deployed over FlexRAN focusing on interference management, mobile edge computing and RAN sharing.",
"title": ""
},
{
"docid": "5fba6770fef320c6e7dee2c848a0a503",
"text": "Person re-identification (Re-ID) aims at recognizing the same person from images taken across different cameras. To address this task, one typically requires a large amount labeled data for training an effective Re-ID model, which might not be practical for real-world applications. To alleviate this limitation, we choose to exploit a sufficient amount of pre-existing labeled data from a different (auxiliary) dataset. By jointly considering such an auxiliary dataset and the dataset of interest (but without label information), our proposed adaptation and re-identification network (ARN) performs unsupervised domain adaptation, which leverages information across datasets and derives domain-invariant features for Re-ID purposes. In our experiments, we verify that our network performs favorably against state-of-the-art unsupervised Re-ID approaches, and even outperforms a number of baseline Re-ID methods which require fully supervised data for training.",
"title": ""
},
{
"docid": "3e43ee5513a0bd8bea8b1ea5cf8cefec",
"text": "Hans-Juergen Boehm Computer Science Department, Rice University, Houston, TX 77251-1892, U.S.A. Mark Weiser Xerox Corporation, Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304, U.S.A. A later version of this paper appeared in Software Practice and Experience 18, 9, pp. 807-820. Copyright 1988 by John Wiley and Sons, Ld. The publishers rules appear to allow posting of preprints, but only on the author’s web site.",
"title": ""
},
{
"docid": "6c41dc25f8d63da094732fd54a8497ff",
"text": "Robotics systems are complex, often consisted of basic services including SLAM for localization and mapping, Convolution Neural Networks for scene understanding, and Speech Recognition for user interaction, etc. Meanwhile, robots are mobile and usually have tight energy constraints, integrating these services onto an embedded platform with around 10 W of power consumption is critical to the proliferation of mobile robots. In this paper, we present a case study on integrating real-time localization, vision, and speech recognition services on a mobile SoC, Nvidia Jetson TX1, within about 10 W of power envelope. In addition, we explore whether offloading some of the services to cloud platform can lead to further energy efficiency while meeting the real-time requirements.",
"title": ""
},
{
"docid": "0250d6bb0bcf11ca8af6c2661c1f7f57",
"text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.",
"title": ""
},
{
"docid": "df398d8757016c51a4ef773a9ffe6738",
"text": "A new approach based on Bayesian networks for traffic flow forecasting is proposed. In this paper, traffic flows among adjacent road links in a transportation network are modeled as a Bayesian network. The joint probability distribution between the cause nodes (data utilized for forecasting) and the effect node (data to be forecasted) in a constructed Bayesian network is described as a Gaussian mixture model (GMM) whose parameters are estimated via the competitive expectation maximization (CEM) algorithm. Finally, traffic flow forecasting is performed under the criterion of minimum mean square error (mmse). The approach departs from many existing traffic flow forecasting models in that it explicitly includes information from adjacent road links to analyze the trends of the current link statistically. Furthermore, it also encompasses the issue of traffic flow forecasting when incomplete data exist. Comprehensive experiments on urban vehicular traffic flow data of Beijing and comparisons with several other methods show that the Bayesian network is a very promising and effective approach for traffic flow modeling and forecasting, both for complete data and incomplete data",
"title": ""
},
{
"docid": "3c84d6a35e2d05d9c0000028554fa780",
"text": "Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to \"absorb\" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers, hindering many applications such as image and speech recognition on mobile phones and other devices. In this paper, we present a novel net- work architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected layers of a deep learning model, leading to dramatic savings in memory and storage consumption. Based on the key observation that the weights of learned convolutional filters are typically smooth and low-frequency, we first convert filter weights to the frequency domain with a discrete cosine transform (DCT) and use a low-cost hash function to randomly group frequency parameters into hash buckets. All parameters assigned the same hash bucket share a single value learned with standard back-propagation. To further reduce model size, we allocate fewer hash buckets to high-frequency components, which are generally less important. We evaluate FreshNets on eight data sets, and show that it leads to better compressed performance than several relevant baselines.",
"title": ""
},
{
"docid": "a7c657ca96b8ad0c79fe3a415654bfd7",
"text": "In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used.",
"title": ""
},
{
"docid": "a3333555cb07907594822d209098d5e4",
"text": "In this paper, we provide a logical formalization of the emotion triggering process and of its relationship with mental attitudes, as described in Ortony, Clore, and Collins’s theory. We argue that modal logics are particularly adapted to represent agents’ mental attitudes and to reason about them, and use a specific modal logic that we call Logic of Emotions in order to provide logical definitions of all but two of their 22 emotions. While these definitions may be subject to debate, we show that they allow to reason about emotions and to draw interesting conclusions from the theory.",
"title": ""
},
{
"docid": "62fa3b06e4fe2e0ac47efc991bbe612e",
"text": "Drones are increasingly flying in sensitive airspace where their presence may cause harm, such as near airports, forest fires, large crowded events, secure buildings, and even jails. This problem is likely to expand given the rapid proliferation of drones for commerce, monitoring, recreation, and other applications. A cost-effective detection system is needed to warn of the presence of drones in such cases. In this paper, we explore the feasibility of inexpensive RF-based detection of the presence of drones. We examine whether physical characteristics of the drone, such as body vibration and body shifting, can be detected in the wireless signal transmitted by drones during communication. We consider whether the received drone signals are uniquely differentiated from other mobile wireless phenomena such as cars equipped with Wi- Fi or humans carrying a mobile phone. The sensitivity of detection at distances of hundreds of meters as well as the accuracy of the overall detection system are evaluated using software defined radio (SDR) implementation.",
"title": ""
},
{
"docid": "4d46ad963c988acc6b6901d228dc68c3",
"text": "OBJECTIVE\nTo establish whether primipaternity and duration of unprotected sexual cohabitation is associated with an increased risk of pre-eclampsia.\n\n\nMETHOD\nAt a tertiary referral center, the study had a case and control group of 60 multigravid women each, as well as a case and control group of 50 primigravid women each. Information was compiled by means of a confidential questionnaire.\n\n\nRESULT\nAfter multiple logistic regression analysis using age, smoking, hypertension in previous pregnancies, change of paternity and duration of unprotected sexual cohabitation as predictors, the regression coefficients for change of paternity and sexual cohabitation of longer than 6 months in multigravid women were -0.4 (P = 0.15) and -1.4 (P = 0.03), respectively.\n\n\nCONCLUSION\nMultigravid women with a period of unprotected sexual cohabitation of longer than 6 months had a decreased risk of pre-eclampsia. Primipaternity was not a significant risk factor for pre-eclampsia.",
"title": ""
},
{
"docid": "7325b97ff3503fab4795715a34c788bc",
"text": "In recent years, modeling data in graph structure became evident and effective for processing in some of the prominent application areas like social analytics, health care analytics, scientific analytics etc. The key sources of massively scaled data are petascale simulations, experimental devices, the internet and scientific applications. Hence, there is a demand for adapt graph querying techniques on such large graph data. Graphs are pervasive in large scale analytics, facing the new challenge such as data size, heterogeneity, uncertainty and data quality. Traditional graph pattern matching approaches are based on inherent isomorphism and simulation. In real life applications, many of them either fail to capture structural or semantic or both similarities. Moreover, in real life applications data graphs constantly bear modifications with small updates. In response to these challenges, we propose a notion that revises traditional notions to characterize graph pattern matching using graph views. Based on this characterization, we outline an approach that efficiently solve graph pattern queries problem over both static and dynamic real life data graphs.",
"title": ""
},
{
"docid": "da607ab67cb9c1e1d08a70b15f9470d7",
"text": "Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present ContextAware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from https://github.com/ thunlp/CANE.",
"title": ""
}
] |
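Several of the passages above are concrete enough to sketch in code. As one example, the heart-rate-variability entropy study turns on how sample entropy reacts to the tolerance r and the record length N. Below is a minimal Richman–Moorman-style sample-entropy implementation with an illustrative sweep over r; the synthetic RR series and the grid of r values are assumptions made for demonstration, not the PhysioNet records or the exact protocol of that study.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r, N) with tolerance r = r_factor * std(x) and Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * x.std()

    def match_count(length):
        # N - m templates of the given length (same count for m and m + 1).
        t = np.array([x[i:i + length] for i in range(n - m)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        return np.sum(d <= r) - (n - m)  # drop self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b)

# Illustrative sweep over the tolerance r, echoing the study's first test.
rng = np.random.default_rng(0)
rr_intervals = 0.8 + 0.05 * rng.standard_normal(500)  # synthetic RR series in seconds
for r_factor in (0.10, 0.15, 0.20, 0.25):
    s = sample_entropy(rr_intervals, m=2, r_factor=r_factor)
    print(f"r = {r_factor:.2f} * sigma  ->  SampEn = {s:.3f}")
```

The sweep makes the passage's point tangible: the estimate moves substantially with r, which is why that study treats parameter selection as part of the method rather than as a fixed constant.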
scidocsrr
|
93641e34ffb13c421bbf4df223b16578
|
The identification of Noteworthy Hotel Reviews for Hotel Management
|
[
{
"docid": "5f366ed9a90448be28c1ec9249b4ec96",
"text": "With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.",
"title": ""
},
{
"docid": "455a6fe5862e3271ac00057d1b569b11",
"text": "Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.",
"title": ""
}
] |
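The helpfulness-prediction passage above pairs review-level and reviewer-level features with Random Forest classifiers. The sketch below shows the general shape of such a pipeline in scikit-learn; the three crude text features, the reviewer_avg_helpfulness input, and the toy labeled corpus are simplified stand-ins for the paper's richer subjectivity, readability, and reviewer feature groups, so treat it as a template rather than a reproduction.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def review_features(text: str, reviewer_avg_helpfulness: float) -> list:
    """Crude proxies: length, mean sentence length (readability), lexical diversity."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return [
        len(words),
        len(words) / max(len(sentences), 1),
        len(set(w.lower() for w in words)) / max(len(words), 1),
        reviewer_avg_helpfulness,
    ]

# Toy corpus: (review text, reviewer's past average helpfulness, voted-helpful label).
corpus = [
    ("Battery life is excellent and setup took two minutes. Highly recommended.", 0.8, 1),
    ("bad", 0.1, 0),
    ("The camera is sharp in daylight but struggles at night; still worth the price.", 0.7, 1),
    ("I hate it!!!", 0.2, 0),
    ("Solid build, responsive screen, and the keyboard feels great for long sessions.", 0.9, 1),
    ("meh not sure", 0.3, 0),
] * 10  # repeated so cross-validation has enough samples

X = np.array([review_features(text, h) for text, h, _ in corpus])
y = np.array([label for _, _, label in corpus])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Swapping in real subjectivity scores, standard readability indices, and the reviewer-history features the passage lists would bring this template closer to the actual study.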
[
{
"docid": "f6358e2594782b1cedb5d6007d6c9808",
"text": "A wealth of recent research involves generating program monitors from declarative specifications. Doing this efficiently has proved challenging, and available implementations often produce infeasibly slow monitors. We demonstrate how to dramatically improve performance -- typically reducing overheads to within an order of magnitude of the program's normal runtime.",
"title": ""
},
{
"docid": "3180f7bd813bcd64065780bc9448dc12",
"text": "This paper reports on email classification and filtering, more specifically on spam versus ham and phishing versus spam classification, based on content features. We test the validity of several novel statistical feature extraction methods. The methods rely on dimensionality reduction in order to retain the most informative and discriminative features. We successfully test our methods under two schemas. The first one is a classic classification scenario using a 10-fold cross-validation technique for several corpora, including four ground truth standard corpora: Ling-Spam, SpamAssassin, PU1, and a subset of the TREC 2007 spam corpus, and one proprietary corpus. In the second schema, we test the anticipatory properties of our extracted features and classification models with two proprietary datasets, formed by phishing and spam emails sorted by date, and with the public TREC 2007 spam corpus. The contributions of our work are an exhaustive comparison of several feature selection and extraction methods in the frame of email classification on different benchmarking corpora, and the evidence that especially the technique of biased discriminant analysis offers better discriminative features for the classification, gives stable classification results notwithstanding the amount of features chosen, and robustly retains their discriminative value over time and data setups. These findings are especially useful in a commercial setting, where short profile rules are built based on a limited number of features for filtering emails.",
"title": ""
},
{
"docid": "26796d48a19ea2a1248b5557814802e8",
"text": "In this paper, we investigate the security challenges and issues of cyber-physical systems. (1)We abstract the general workflow of cyber physical systems, (2)identify the possible vulnerabilities, attack issues, adversaries characteristics and a set of challenges that need to be addressed, (3)then we also propose a context-aware security framework for general cyber-physical systems and suggest some potential research areas and problems.",
"title": ""
},
{
"docid": "135deb35cf3600cba8e791d604e26ffb",
"text": "Much of this book describes the algorithms behind search engines and information retrieval systems. By contrast, this chapter focuses on the human users of search systems, and the window through which search systems are seen: the search user interface. The role of the search user interface is to aid in the searcher's understanding and expression of their information needs, and to help users formulate their queries, select among available information sources, understand search results, and keep track of the progress of their search. In the first edition of this book, very little was known about what makes for an effective search interface. In the intervening years, much has become understood about which ideas work from a usability perspective, and which do not. This chapter briefly summarizes the state of the art of search interface design, both in terms of developments in academic research as well as in deployment in commercial systems. The sections that follow discuss how people search, search interfaces today, visualization in search interfaces, and the design and evaluation of search user interfaces. Search tasks range from the relatively simple (e.g., looking up disputed facts or finding weather information) to the rich and complex (e.g., job seeking and planning vacations). Search interfaces should support a range of tasks, while taking into account how people think about searching for information. This section summarizes theoretical models about and empirical observations of the process of online information seeking. Information Lookup versus Exploratory Search User interaction with search interfaces differs depending on the type of task, the amount of time and effort available to invest in the process, and the domain expertise of the information seeker. The simple interaction dialogue used in Web search engines is most appropriate for finding answers to questions or to finding Web sites or other resources that act as search starting points. But, as Marchionini [89] notes, the \" turn-taking \" interface of Web search engines is inherently limited and is many cases is being supplanted by speciality search engines – such as for travel and health information – that offer richer interaction models. Marchionini [89] makes a distinction between information lookup and exploratory search. Lookup tasks are akin to fact retrieval or question answering, and are satisfied by short, discrete pieces of information: numbers, dates, names, or names of files or Web sites. Standard Web search interactions (as well as standard database management system queries) can …",
"title": ""
},
{
"docid": "2800046ff82a5bc43b42c1d2e2dc6777",
"text": "We develop a novel, fundamental and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update and random fixed point. By varying its two parameters—a positive definite matrix (defining geometry), and a random matrix (sampled in an i.i.d. fashion in each iteration)—we recover a comprehensive array of well known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method and random Gaussian pursuit. We naturally also obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. However, we also give an exact formula for the evolution of the expected iterates, which allows us to give lower bounds on the convergence rate.",
"title": ""
},
{
"docid": "22baf516cd64e54fdd74ff6fbb5076a1",
"text": "In a world of global trading, maritime safety, security and efficiency are crucial issues. We propose a multi-task deep learning framework for vessel monitoring using Automatic Identification System (AIS) data streams. We combine recurrent neural networks with latent variable modeling and an embedding of AIS messages to a new representation space to jointly address key issues to be dealt with when considering AIS data streams: massive amount of streaming data, noisy data and irregular time-sampling. We demonstrate the relevance of the proposed deep learning framework on real AIS datasets for a three-task setting, namely trajectory reconstruction, anomaly detection and vessel type identification.",
"title": ""
},
{
"docid": "80425b563740c048d2126b849b23498f",
"text": "Automatic determination of synonyms and/or semantically related words has various applications in Natural Language Processing. Two mainstream paradigms to date, lexicon-based and distributional approaches, both exhibit pros and cons with regard to coverage, complexity, and quality. In this paper, we propose three novel methods—two rule-based methods and one machine learning approach—to identify synonyms from definition texts in a machinereadable dictionary. Extracted synonyms are evaluated in two extrinsic experiments and one intrinsic experiment. Evaluation results show that our pattern-based approach achieves best performance in one of the experiments and satisfactory results in the other, comparable to corpus-based state-of-the-art results.",
"title": ""
},
{
"docid": "79c5085cb9f85dbcd52637a71234c199",
"text": "Abstract: In this paper, a three-phase six-switch standard boost rectifier with unity-power-factor-correction is investigated. A general equation is derived that relates input phase voltage and duty ratios of switches in continuous conduction mode. Based on one of solutions and using One-Cycle Control, a Unified Constant-frequency Integration (UCI) controller for powerfactor-correction (PFC) is proposed. For the standard bridge boost rectifier, unity-power-factor and low total-harmonicdistortion (THD) can be realized in all three phases with a simple circuit that is composed of one integrator with reset along with several flips-flops, comparators, and some logic and linear components. It does not require multipliers and threephase voltage sensors, which are used in many other control approaches. In addition, it employs constant switching frequency modulation that is desirable for industrial applications. The proposed control approach is simple and reliable. Theoretical analysis is verified by simulation and experimental results.",
"title": ""
},
{
"docid": "707947e404b363963d08a9b7d93c87fb",
"text": "The Lexical Substitution task involves selecting and ranking lexical paraphrases for a target word in a given sentential context. We present PIC, a simple measure for estimating the appropriateness of substitutes in a given context. PIC outperforms another simple, comparable model proposed in recent work, especially when selecting substitutes from the entire vocabulary. Analysis shows that PIC improves over baselines by incorporating frequency biases into predictions.",
"title": ""
},
{
"docid": "8bbf5cc2424e0365d6968c4c465fe5f7",
"text": "We describe a method for assigning English tense and aspect in a system that realizes surface text for symbolically encoded narratives. Our testbed is an encoding interface in which propositions that are attached to a timeline must be realized from several temporal viewpoints. This involves a mapping from a semantic encoding of time to a set of tense/aspect permutations. The encoding tool realizes each permutation to give a readable, precise description of the narrative so that users can check whether they have correctly encoded actions and statives in the formal representation. Our method selects tenses and aspects for individual event intervals as well as subintervals (with multiple reference points), quoted and unquoted speech (which reassign the temporal focus), and modal events such as conditionals.",
"title": ""
},
{
"docid": "60a95081e1c82e166e6c1b7ca725b6df",
"text": "At present, solar energy conversion technologies face cost and scalability hurdles in the technologies required for a complete energy system. To provide a truly widespread primary energy source, solar energy must be captured, converted, and stored in a cost-effective fashion. New developments in nanotechnology, biotechnology, and the materials and physical sciences may enable step-change approaches to cost-effective, globally scalable systems for solar energy use.",
"title": ""
},
{
"docid": "40479536efec6311cd735f2bd34605d7",
"text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.",
"title": ""
},
{
"docid": "b07f858d08f40f61f3ed418674948f12",
"text": "Nowadays, due to the great distance between design and implementation worlds, different skills are necessary to create a game system. To solve this problem, a lot of strategies for game development, trying to increase the abstraction level necessary for the game production, were proposed. In this way, a lot of game engines, game frameworks and others, in most cases without any compatibility or reuse criteria between them, were developed. This paper presents a new generative programming approach, able to increase the production of a digital game by the integration of different game development artifacts, following a system family strategy focused on variable and common aspects of a computer game. As result, high level abstractions of games, based on a common language, can be used to configure met programming transformations during the game production, providing a great compatibility level between game domain and game implementation artifacts.",
"title": ""
},
{
"docid": "3939958f235df9dbf7733f946bfa5051",
"text": "This paper presents preliminary findings from our empirical study of the cognition employed by performers in improvisational theatre. Our study has been conducted in a laboratory setting with local improvisers. Participants performed predesigned improv \"games\", which were videotaped and shown to each individual participant for a retrospective protocol collection. The participants were then shown the video again as a group to elicit data on group dynamics, misunderstandings, etc. This paper presents our initial findings that we have built based on our initial analysis of the data and highlights details of interest.",
"title": ""
},
{
"docid": "d4ee96388ca88c0a5d2a364f826dea91",
"text": "Cloud computing, as an emerging computing paradigm, enables users to remotely store their data into a cloud so as to enjoy scalable services on-demand. Especially for small and medium-sized enterprises with limited budgets, they can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, to make collaborations, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data, may raise potential security and privacy issues. To keep the sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises to efficiently share confidential data on cloud servers. We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, and then making a performance-expressivity tradeoff, finally applying proxy re-encryption and lazy re-encryption to our scheme.",
"title": ""
},
{
"docid": "9b2f17d76fd0e44059d29083a931f2f1",
"text": "This paper presents a security system based on speaker identification. Mel frequency Cepstral Coefficients{MFCCs} have been used for feature extraction and vector quantization technique is used to minimize the amount of data to be handled .",
"title": ""
},
{
"docid": "6f5afc38b09fa4fd1e47d323cfe850c9",
"text": "In the past several years there has been extensive research into honeypot technologies, primarily for detection and information gathering against external threats. However, little research has been done for one of the most dangerous threats, the advance insider, the trusted individual who knows your internal organization. These individuals are not after your systems, they are after your information. This presentation discusses how honeypot technologies can be used to detect, identify, and gather information on these specific threats.",
"title": ""
},
{
"docid": "ae12d709da329eea3cc8e49c98c21518",
"text": "This paper aims to explore how socialand self-factors may affect consumers’ brand loyalty while they follow companies’ microblogs. Drawing upon the commitment-trust theory, social influence theory, and self-congruence theory, we propose that network externalities, social norms, and self-congruence are the key determinants in the research model. The impacts of these factors on brand loyalty will be mediated by brand trust and brand commitment. We empirically test the model through an online survey on an existing microblogging site. The findings illustrate that network externalities and self-congruence can positively affect brand trust, which subsequently leads to brand commitment and brand loyalty. Meanwhile, social norms, together with self-congruence, directly posit influence on brand commitment. Brand commitment is then positively associated with brand loyalty. We believe that the findings of this research can contribute to the literature. We offer new insights regarding how consumers’ brand loyalty develops from the two social-factors and their self-congruence with the brand. Company managers could also apply our findings to strengthen their relationship marketing with consumers on microblogging sites.",
"title": ""
},
{
"docid": "ed08e93061f2d248f6b70fde6e17b431",
"text": "With the rapid growth of e-commerce, the B2C of e-commerce has been a significant issue. The purpose of this study aims to predict consumers’ purchase intentions by integrating trust and perceived risk into the model to empirically examine the impact of key variables. 705 samples were obtained from online users purchasing from e-vendor of Yahoo! Kimo. This study applied the Structural Equation Model to examine consumers’ online shopping based on the Technology Acceptance Model (TAM). The results indicate that perceived ease of use (PEOU), perceived usefulness (PU), trust, and perceived risk significantly impact purchase intentions both directly and indirectly. Moreover, trust significantly reduced online consumer perceived risk during online shopping. This study provides evidence of the relationship between consumers’ purchase intention, perceived trust and perceived risk to websites of specific e-vendors. Such knowledge may help to inform promotion, designing, and advertising website strategies employed by practitioners.",
"title": ""
}
] |
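Among the methods above, the randomized iterative solver for consistent linear systems is the easiest to make concrete: with a row-sampling sketch it reduces to the randomized Kaczmarz method, one of the special cases that passage names. Here is a minimal NumPy version of that special case on a synthetic consistent system; the problem size, the row-norm-proportional sampling, and the iteration budget are illustrative choices rather than prescriptions from the paper.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=20000, seed=0):
    """Solve a consistent system Ax = b by repeated random row projections."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_sq = np.einsum("ij,ij->i", A, A)        # squared row norms
    probs = row_sq / row_sq.sum()               # sample row i with prob ~ ||a_i||^2
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project the current iterate onto the hyperplane a_i^T x = b_i.
        x += (b[i] - A[i] @ x) / row_sq[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true                                  # consistent by construction
x_hat = randomized_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The same loop structure accommodates block sampling or a different geometry-defining matrix, which is how the passage recovers the broader family of methods it describes.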
scidocsrr
|
13870b5bbb44bcfbcee4967a33cf5738
|
Fast Forward Through Opportunistic Incremental Meaning Representation Construction
|
[
{
"docid": "cf2fc7338a0a81e4c56440ec7c3c868e",
"text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.",
"title": ""
},
{
"docid": "2e0d4680cf5953d81f7e8bf8e932e64d",
"text": "Ontological Semantics is an approach to automatically extracting the meaning of natural language texts. The OntoSem text analysis system, developed according to this approach, generates ontologically grounded, disambiguated text meaning representations that can serve as input to intelligent agent reasoning. This article focuses on two core subtasks of overall semantic analysis: lexical disambiguation and the establishment of the semantic dependency structure. In addition to describing the knowledge bases and processors used to carry out these tasks, we introduce a novel evaluation suite suited specifically to knowledge-based systems. To situate this contribution in the field, we critically compare the goals, methods and tasks of Ontological Semantics with those of the currently dominant paradigm of natural language processing, which relies on machine learning.",
"title": ""
},
{
"docid": "c741867c7d29026da910c52be073942d",
"text": "In this report we summarize the results of the SemEval 2016 Task 8: Meaning Representation Parsing. Participants were asked to generate Abstract Meaning Representation (AMR) (Banarescu et al., 2013) graphs for a set of English sentences in the news and discussion forum domains. Eleven sites submitted valid systems. The availability of state-of-the-art baseline systems was a key factor in lowering the bar to entry; many submissions relied on CAMR (Wang et al., 2015b; Wang et al., 2015a) as a baseline system and added extensions to it to improve scores. The evaluation set was quite difficult to parse, particularly due to creative approaches to word representation in the web forum portion. The top scoring systems scored 0.62 F1 according to the Smatch (Cai and Knight, 2013) evaluation heuristic. We show some sample sentences along with a comparison of system parses and perform quantitative ablative studies.",
"title": ""
}
] |
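The meaning-representation passages above score parses with Smatch, which measures overlap between two AMR graphs viewed as sets of triples. The sketch below computes that triple-level precision/recall/F1 for the simplified case where variable names are already aligned; full Smatch additionally searches over variable mappings by hill-climbing, which is omitted here, so this is an illustrative simplification rather than the official metric, and the example triples are hypothetical.

```python
def triple_f1(gold: set, predicted: set) -> tuple:
    """Precision, recall, and F1 over (source, relation, target) triples, variables pre-aligned."""
    overlap = len(gold & predicted)
    p = overlap / len(predicted) if predicted else 0.0
    r = overlap / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical triples for "The boy wants to go": instance triples plus role triples.
gold = {
    ("w", "instance", "want-01"), ("b", "instance", "boy"), ("g", "instance", "go-01"),
    ("w", "ARG0", "b"), ("w", "ARG1", "g"), ("g", "ARG0", "b"),
}
predicted = {
    ("w", "instance", "want-01"), ("b", "instance", "boy"), ("g", "instance", "go-01"),
    ("w", "ARG0", "b"), ("w", "ARG1", "g"),  # parser missed the control edge g -> b
}
print("P/R/F1 = %.2f / %.2f / %.2f" % triple_f1(gold, predicted))
```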
[
{
"docid": "2fba9e2262ecd8faea139ea998a789bb",
"text": "In this paper we propose an accurate and robust image mosaicing method of soccer video taken from a rotating and zooming camera using line tracking and self-calibration. The mosaicing of soccer videos is not easy, because their playing fields are low textured and moving players are included in the fields. Our approach is to track line features on the playing fields. The line features are detected and tracked using a self-calibration technique for a rotating and zooming camera. To track line features efficiently, we propose a new line tracking algorithm, called camera parameter guided line tracking, which works even when the camera motion undergoes sudden changes. Since we do not need to know any model for scenes beforehand, the proposed algorithm can be easily extended to other video sources, as well as other sports videos. Experimental results show the accuracy and robustness of the algorithm. An application of mosaicing is also presented.",
"title": ""
},
{
"docid": "999c7d8d16817d4b991e5b794be3b074",
"text": "Smile detection from facial images is a specialized task in facial expression analysis with many potential applications such as smiling payment, patient monitoring and photo selection. The current methods on this study are to represent face with low-level features, followed by a strong classifier. However, these manual features cannot well discover information implied in facial images for smile detection. In this paper, we propose to extract high-level features by a well-designed deep convolutional networks (CNN). A key contribution of this work is that we use both recognition and verification signals as supervision to learn expression features, which is helpful to reduce same-expression variations and enlarge different-expression differences. Our method is end-to-end, without complex pre-processing often used in traditional methods. High-level features are taken from the last hidden layer neuron activations of deep CNN, and fed into a soft-max classifier to estimate. Experimental results show that our proposed method is very effective, which outperforms the state-of-the-art methods. On the GENKI smile detection dataset, our method reduces the error rate by 21% compared with the previous best method.",
"title": ""
},
{
"docid": "7fd1ac60f18827dbe10bc2c10f715ae9",
"text": "Sentiment analysis in Twitter is a field that has recently attracted research interest. Twitter is one of the most popular microblog platforms on which users can publish their thoughts and opinions. Sentiment analysis in Twitter tackles the problem of analyzing the tweets in terms of the opinion they express. This survey provides an overview of the topic by investigating and briefly describing the algorithms that have been proposed for sentiment analysis in Twitter. The presented studies are categorized according to the approach they follow. In addition, we discuss fields related to sentiment analysis in Twitter including Twitter opinion retrieval, tracking sentiments over time, irony detection, emotion detection, and tweet sentiment quantification, tasks that have recently attracted increasing attention. Resources that have been used in the Twitter sentiment analysis literature are also briefly presented. The main contributions of this survey include the presentation of the proposed approaches for sentiment analysis in Twitter, their categorization according to the technique they use, and the discussion of recent research trends of the topic and its related fields.",
"title": ""
},
{
"docid": "7ecfea8abc9ba29719cdd4bf02e99d5d",
"text": "The literature shows an increase in blended learning implementations (N = 74) at faculties of education in Turkey whereas pre-service and in-service teachers’ ICT competencies have been identified as one of the areas where they are in need of professional development. This systematic review was conducted to find out the impact of blended learning on academic achievement and attitudes at teacher education programs in Turkey. 21 articles and 10 theses complying with all pre-determined criteria (i.e., studies having quantitative research design or at least a quantitative aspect conducted at pre-service teacher education programs) included within the scope of this review. With regard to academic achievement, it was synthesized that majority of the studies confirmed its positive impact on attaining course outcomes. Likewise, blended learning environment was revealed to contribute pre-service teachers to develop positive attitudes towards the courses. It was also concluded that face-to-face aspect of the courses was favoured considerably as it enhanced social interaction between peers and teachers. Other benefits of blended learning were listed as providing various materials, receiving prompt feedback, and tracking progress. Slow internet access, connection failure and anxiety in some pre-service teachers on using ICT were reported as obstacles. Regarding the positive results of blended learning and the significance of ICT integration, pre-service teacher education curricula are suggested to be reconstructed by infusing ICT into entire program through blended learning rather than delivering isolated ICT courses which may thus serve for prospective teachers as catalysts to integrate the use of ICT in their own teaching.",
"title": ""
},
{
"docid": "20ec78dfbfe5b9709f25bd28e0e66e8d",
"text": "BACKGROUND\nElectronic medical records (EMRs) contain vast amounts of data that is of great interest to physicians, clinical researchers, and medial policy makers. As the size, complexity, and accessibility of EMRs grow, the ability to extract meaningful information from them has become an increasingly important problem to solve.\n\n\nMETHODS\nWe develop a standardized data analysis process to support cohort study with a focus on a particular disease. We use an interactive divide-and-conquer approach to classify patients into relatively uniform within each group. It is a repetitive process enabling the user to divide the data into homogeneous subsets that can be visually examined, compared, and refined. The final visualization was driven by the transformed data, and user feedback direct to the corresponding operators which completed the repetitive process. The output results are shown in a Sankey diagram-style timeline, which is a particular kind of flow diagram for showing factors' states and transitions over time.\n\n\nRESULTS\nThis paper presented a visually rich, interactive web-based application, which could enable researchers to study any cohorts over time by using EMR data. The resulting visualizations help uncover hidden information in the data, compare differences between patient groups, determine critical factors that influence a particular disease, and help direct further analyses. We introduced and demonstrated this tool by using EMRs of 14,567 Chronic Kidney Disease (CKD) patients.\n\n\nCONCLUSIONS\nWe developed a visual mining system to support exploratory data analysis of multi-dimensional categorical EMR data. By using CKD as a model of disease, it was assembled by automated correlational analysis and human-curated visual evaluation. The visualization methods such as Sankey diagram can reveal useful knowledge about the particular disease cohort and the trajectories of the disease over time.",
"title": ""
},
{
"docid": "8bafdf88447e61f6c797c614538ff058",
"text": "This paper describes the Adaptive Multirate Wideband (AMR-WB) speech codec recently selected by the Third Generation Partnership Project (3GPP) for GSM and the third generation mobile communication WCDMA system for providing wideband speech services. The AMR-WB speech codec algorithm was selected in December 2000 and the corresponding specifications were approved in March 2001. The AMR-WB codec was also selected by the International Telecommunication Union—Telecommunication Sector (ITU-T) in July 2001 in the standardization activity for wideband speech coding around 16 kb/s and was approved in January 2002 as Recommendation G.722.2. The adoption of AMR-WB by ITU-T is of significant importance since for the first time the same codec is adopted for wireless as well as wireline services. AMR-WB uses an extended audio bandwidth from 50 Hz to 7 kHz and gives superior speech quality and voice naturalness compared to existing secondand third-generation mobile communication systems. The wideband speech service provided by the AMR-WB codec will give mobile communication speech quality that also substantially exceeds (narrowband) wireline quality. The paper details AMR-WB standardization history, algorithmic description including novel techniques for efficient ACELP wideband speech coding and subjective quality performance of the codec.",
"title": ""
},
{
"docid": "4193bd310422b555faa5f6de8a1a94cd",
"text": "Although hundreds of chemical compounds have been identified in grapes and wines, only a few compounds actually contribute to sensory perception of wine flavor. This critical review focuses on volatile compounds that contribute to wine aroma and provides an overview of recent developments in analytical techniques for volatiles analysis, including methods used to identify the compounds that make the greatest contributions to the overall aroma. Knowledge of volatile composition alone is not enough to completely understand the overall wine aroma, however, due to complex interactions of odorants with each other and with other nonvolatile matrix components. These interactions and their impact on aroma volatility are the focus of much current research and are also reviewed here. Finally, the sequencing of the grapevine and yeast genomes in the past approximately 10 years provides the opportunity for exciting multidisciplinary studies aimed at understanding the influences of multiple genetic and environmental factors on grape and wine flavor biochemistry and metabolism (147 references).",
"title": ""
},
{
"docid": "9556a7f345a31989bff1ee85fc31664a",
"text": "The neural basis of variation in human intelligence is not well delineated. Numerous studies relating measures of brain size such as brain weight, head circumference, CT or MRI brain volume to different intelligence test measures, with variously defined samples of subjects have yielded inconsistent findings with correlations from approximately 0 to 0.6, with most correlations approximately 0.3 or 0.4. The study of intelligence in relation to postmortem cerebral volume is not available to date. We report the results of such a study on 100 cases (58 women and 42 men) having prospectively obtained Full Scale Wechsler Adult Intelligence Scale scores. Ability correlated with cerebral volume, but the relationship depended on the realm of intelligence studied, as well as the sex and hemispheric functional lateralization of the subject. General verbal ability was positively correlated with cerebral volume and each hemisphere's volume in women and in right-handed men accounting for 36% of the variation in verbal intelligence. There was no evidence of such a relationship in non-right-handed men, indicating that at least for verbal intelligence, functional asymmetry may be a relevant factor in structure-function relationships in men, but not in women. In women, general visuospatial ability was also positively correlated with cerebral volume, but less strongly, accounting for approximately 10% of the variance. In men, there was a non-significant trend of a negative correlation between visuospatial ability and cerebral volume, suggesting that the neural substrate of visuospatial ability may differ between the sexes. Analyses of additional research subjects used as test cases provided support for our regression models. In men, visuospatial ability and cerebral volume were strongly linked via the factor of chronological age, suggesting that the well-documented decline in visuospatial intelligence with age is related, at least in right-handed men, to the decrease in cerebral volume with age. We found that cerebral volume decreased only minimally with age in women. This leaves unknown the neural substrate underlying the visuospatial decline with age in women. Body height was found to account for 1-4% of the variation in cerebral volume within each sex, leaving the basis of the well-documented sex difference in cerebral volume unaccounted for. With finer testing instruments of specific cognitive abilities and measures of their associated brain regions, it is likely that stronger structure-function relationships will be observed. Our results point to the need for responsibility in the consideration of the possible use of brain images as intelligence tests.",
"title": ""
},
{
"docid": "fe33ff51ca55bf745bdcdf8ee02e2d36",
"text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in very short time, less than 1 h for data sets of order 104). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.",
"title": ""
},
{
"docid": "c35fa79bd405ec0fb6689d395929c055",
"text": "This study examines the potential profit of bull flag technical trading rules using a template matching technique based on pattern recognition for the Nasdaq Composite Index (NASDAQ) and Taiwan Weighted Index (TWI). To minimize measurement error due to data snooping, this study performed a series of experiments to test the effectiveness of the proposed method. The empirical results indicated that all of the technical trading rules correctly predict the direction of changes in the NASDAQ and TWI. This finding may provide investors with important information on asset allocation. Moreover, better bull flag template price fit is associated with higher average return. The empirical results demonstrated that the average return of trading rules conditioned on bull flag significantly better than buying every day for the study period, especially for TWI. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1c05d934574fe3d7f115863067a34b96",
"text": "We present EzPC: a secure two-party computation (2PC) framework that generates efficient 2PC protocols from high-level, easyto-write programs. EzPC provides formal correctness and security guarantees while maintaining performance and scalability. Previous language frameworks, such as CBMC-GC, ObliVM, SMCL, and Wysteria, generate protocols that use either arithmetic or boolean circuits exclusively. Our compiler is the first to generate protocols that combine both arithmetic sharing and garbled circuits for better performance. We empirically demonstrate that the protocols generated by our framework match or outperform (up to 19x) recent works that provide hand-crafted protocols for various functionalities such as secure prediction and matrix factorization.",
"title": ""
},
{
"docid": "3bc48489d80e824efb7e3512eafc6f30",
"text": "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.",
"title": ""
},
{
"docid": "b3a1aba2e9a3cfc8897488bb058f3358",
"text": "The social networking site, Facebook, has gained an enormous amount of popularity. In this article, we review the literature on the factors contributing to Facebook use. We propose a model suggesting that Facebook use is motivated by two primary needs: (1) The need to belong and (2) the need for self-presentation. Demographic and cultural factors contribute to the need to belong, whereas neuroticism, narcissism, shyness, self-esteem and self-worth contribute to the need for self presentation. Areas for future research are discussed.",
"title": ""
},
{
"docid": "fef39a30bb531f40a8cca57afb3709c5",
"text": "To determine Regions of Interest (ROI) in a scene, perceptual saliency of regions has to be measured. When scenes are viewed with the same context and motivation, these ROIs are often highly correlated among different people. As a result, it is possible to develop a computational model of visual attention that can analyze a scene and accurately estimate the location of viewers’ ROIs. Color saliency is investigated in this paper. In particular, a subjective experiment has been carried out to estimate which hues attract more human attention. The performance of the visual attention model including color saliency are assessed in the context of a segmentation evaluation application.",
"title": ""
},
{
"docid": "b4230ce4dbde6d2d01cacd9c2329dc0b",
"text": "This paper proposes a scalable and efficient cacheupdate technique to improve the performance of in-memorycluster computing in Spark, a popular open-source system forbig data computing. Although the memory cache speeds up dataprocessing in Spark, its data immutability constraint requiresreloading the whole RDD when part of its data is updated. Suchconstraint makes the RDD update inefficient. To address thisproblem, we divide an RDD into partitions, and propose thepartial-update RDD (PRDD) method to enable users to replaceindividual partition(s) of an RDD. We devise two solutions to theRDD partition problem – a dynamic programming algorithm anda nonlinear programming method. Experiment results suggestthat, PRDD achieves 4.32x speedup when compared with theoriginal RDD in Spark. We apply PRDD to a billing system forChunghwa Telecomm, the largest telecommunication company inTaiwan. Our result shows that the PRDD based billing systemoutperforms the original billing system in CHT by a factor of24x in throughput. We also evaluate PRDD using the TPC-Hbenchmark, which also yields promising result.",
"title": ""
},
{
"docid": "36d4f65e89fa5187c8b591637c1d933c",
"text": "This paper presents the development of a compact, modular rotary series elastic actuator (SEA) design that can be customized to meet the requirements of a wide range of applications. The concept incorporates flat brushless motors and planetary gearheads instead of expensive harmonic drives and a flat torsional spring design to create a lightweight, lowvolume, easily reconfigurable, and relatively high-performance modular SEA for use in active impedance controlled devices. The key innovations include a Hall effect sensor for direct spring displacement measurements that mitigate the negative impact of backlash on SEA control performance. Both torque and impedance controllers are developed and evaluated using a 1-degree-of-freedom (DoF) prototype of the proposed actuator package. The results demonstrate the performance of a stable first-order impedance controller tested over a range of target impedances. Finally, the flexibility of the modular SEA is demonstrated by configuring it for use in five different actuator specifications designed for use in the uBot-7 mobile manipulator requiring spring stiffnesses from 3 N m/deg to 11.25 N m/deg and peak torque outputs from 12 N m to 45 N m. [DOI: 10.1115/1.4032975]",
"title": ""
},
{
"docid": "c43532ec0c38136c3563568a73e8f3ce",
"text": "BACKGROUND & AIMS\nThe asialoglycoprotein receptor on hepatocyte membranes recognizes the galactose residues of glycoproteins. We investigated the specificity, accuracy and threshold value of asialoglycoprotein receptor imaging for estimating liver reserve via scintigraphy using (111)In-hexavalent lactoside in mouse models.\n\n\nMETHODS\n(111)In-hexavalent lactoside scintigraphy for asialoglycoprotein receptor imaging was performed on groups of normal mice, orthotopic SK-HEP-1-bearing mice, subcutaneous HepG2-bearing mice, mice with 20-80% partial hepatectomy and mice with acute hepatitis induced by acetaminophen. Liver reserve was measured by relative liver uptake and compared with normal mice. Asialoglycoprotein receptor blockade was performed via an in vivo asialofetuin competitive binding assay.\n\n\nRESULTS\nA total of 73.64±7.11% of the injection dose accumulated in the normal liver tissue region, and radioactivity was barely detected in the hepatoma region. When asialoglycoprotein receptor was blocked using asialofetuin, less than 0.41±0.04% of the injection dose was detected as background in the liver. Asialoglycoprotein receptor imaging data revealed a linear correlation between (111)In-hexavalent lactoside binding and residual liver mass (R(2)=0.8548) in 20-80% of partially hepatectomized mice, demonstrating the accuracy of (111)In-hexavalent lactoside imaging for measuring the functional liver mass. Asialoglycoprotein receptor imaging data in mice with liver failure induced using 600mg/kg acetaminophen revealed 19-45% liver reserve relative to normal mice and a fatal threshold value of 25% liver reserve.\n\n\nCONCLUSION\nThe (111)In-hexavalent lactoside imaging method appears to be a good, specific, visual and quantitative predictor of functional liver reserve. The diagnostic threshold for survival was at 25% liver reserve in mice.",
"title": ""
},
{
"docid": "1fb9e7f0baf32da90afe9648cdcb27dd",
"text": "A new generation of designer solvents emerged in the last decade as promising green media for multiple applications, including separation processes: the low-transition-temperature mixtures (LTTMs). They can be prepared by mixing natural high-melting-point starting materials, which form a liquid by hydrogen-bond interactions. Among them, deep-eutectic solvents (DESs) were presented as promising alternatives to conventional ionic liquids (ILs). Some limitations of ILs are overcome by LTTMs, which are cheap and easy to prepare from natural and readily available starting materials, biodegradable, and renewable.",
"title": ""
},
{
"docid": "f6c8e3afce6f47dd80ed4fadc68dc1f0",
"text": "PURPOSE\nThe CD20 B-lymphocyte surface antigen expressed by B-cell lymphomas is an attractive target for radioimmunotherapy, treatment using radiolabeled antibodies. We conducted a phase I dose-escalation trial to assess the toxicity, tumor targeting, and efficacy of nonmyeloablative doses of an anti-CD20 monoclonal antibody (anti-B1) labeled with iodine-131 (131I) in 34 patients with B-cell lymphoma who had failed chemotherapy.\n\n\nPATIENTS AND METHODS\nPatients were first given tracelabeled doses of 131I-labeled anti-B1 (15 to 20 mg, 5 mCi) to assess radiolabeled antibody biodistribution, and then a radioimmunotherapeutic dose (15 to 20 mg) labeled with a quantity of 131I that would deliver a specified centigray dose of whole-body radiation predicted by the tracer dose. Whole-body radiation doses were escalated from 25 to 85 cGy in sequential groups of patients in 10-cGy increments. To evaluate if radiolabeled antibody biodistribution could be optimized, initial patients were given one or two additional tracer doses on successive weeks, each dose preceded by an infusion of 135 mg of unlabeled anti-B1 one week and 685 mg the next. The unlabeled antibody dose resulting in the most optimal tracer biodistribution was also given before the radioimmunotherapeutic dose. Later patients were given a single tracer dose and radioimmunotherapeutic dose preceded by infusion of 685 mg of unlabeled anti-B1.\n\n\nRESULTS\nTreatment was well tolerated. Hematologic toxicity was dose-limiting, and 75 cGy was established as the maximally tolerated whole-body radiation dose. Twenty-eight patients received radioimmunotherapeutic doses of 34 to 161 mCi, resulting in complete remission in 14 patients and a partial response in eight. All 13 patients with low-grade lymphoma responded, and 10 achieved a complete remission. Six of eight patients with transformed lymphoma responded. Thirteen of 19 patients whose disease was resistant to their last course of chemotherapy and all patients with chemotherapy-sensitive disease responded. The median duration of complete remission exceeds 16.5 months. Six patients remain in complete remission 16 to 31 months after treatment.\n\n\nCONCLUSION\nNonmyeloablative radioimmunotherapy with 131I-anti-B1 is associated with a high rate of durable remissions in patients with B-cell lymphoma refractory to chemotherapy.",
"title": ""
},
{
"docid": "ed888adc25f012b9550fc53f30a9332d",
"text": "BACKGROUND\nThe PedsQL Measurement Model was designed to measure health-related quality of life (HRQOL) in children and adolescents. The PedsQL 4.0 Generic Core Scales were developed to be integrated with the PedsQL Disease-Specific Modules. The newly developed PedsQL Family Impact Module was designed to measure the impact of pediatric chronic health conditions on parents and the family. The PedsQL Family Impact Module measures parent self-reported physical, emotional, social, and cognitive functioning, communication, and worry. The Module also measures parent-reported family daily activities and family relationships.\n\n\nMETHODS\nThe 36-item PedsQL Family Impact Module was administered to 23 families of medically fragile children with complex chronic health conditions who either resided in a long-term care convalescent hospital or resided at home with their families.\n\n\nRESULTS\nInternal consistency reliability was demonstrated for the PedsQL Family Impact Module Total Scale Score (alpha = 0.97), Parent HRQOL Summary Score (alpha = 0.96), Family Functioning Summary Score (alpha = 0.90), and Module Scales (average alpha = 0.90, range = 0.82 - 0.97). The PedsQL Family Impact Module distinguished between families with children in a long-term care facility and families whose children resided at home.\n\n\nCONCLUSIONS\nThe results demonstrate the preliminary reliability and validity of the PedsQL Family Impact Module in families with children with complex chronic health conditions. The PedsQL Family Impact Module will be further field tested to determine the measurement properties of this new instrument with other pediatric chronic health conditions.",
"title": ""
}
] |
scidocsrr
|
8bd1523380624e72343daa43e272e8a3
|
Middle School Students ' Perceptions on Academic Motivation and Student Engagement
|
[
{
"docid": "040fbc1d1d75855fbf15f47880c2aefd",
"text": "The emotional connections students foster in their classrooms are likely to impact their success in school. Using a multimethod, multilevel approach, this study examined the link between classroom emotional climate and academic achievement, including the role of student engagement as a mediator. Data were collected from 63 fifthand sixth-grade classrooms (N 1,399 students) and included classroom observations, student reports, and report card grades. As predicted, multilevel mediation analyses showed that the positive relationship between classroom emotional climate and grades was mediated by engagement, while controlling for teacher characteristics and observations of both the organizational and instructional climates of the classrooms. Effects were robust across grade level and student gender. The discussion highlights the role of classroom-based, emotion-related interactions to promote academic achievement.",
"title": ""
},
{
"docid": "5275184686a8453a1922cec7a236b66d",
"text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.",
"title": ""
}
] |
[
{
"docid": "e11a1e3ef5093aa77797463b7b8994ea",
"text": "Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human–robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.",
"title": ""
},
{
"docid": "f17f2e754149474ea879711dc5bcd087",
"text": "In grasping, shape adaptation between hand and object has a major influence on grasp success. In this paper, we present an approach to grasping unknown objects that explicitly considers the effect of shape adaptability to simplify perception. Shape adaptation also occurs between the hand and the environment, for example, when fingers slide across the surface of the table to pick up a small object. Our approach to grasping also considers environmental shape adaptability to select grasps with high probability of success. We validate the proposed shape-adaptability-aware grasping approach in 880 real-world grasping trials with 30 objects. Our experiments show that the explicit consideration of shape adaptability of the hand leads to robust grasping of unknown objects. Simple perception suffices to achieve this robust grasping behavior.",
"title": ""
},
{
"docid": "c6fae566f6193c75679c2ed9dc433b1c",
"text": "Concept prerequisite learning focuses on machine learning methods for measuring the prerequisite relation among concepts. With the importance of prerequisites for education, it has recently become a promising research direction. A major obstacle to extracting prerequisites at scale is the lack of large scale labels which will enable effective data driven solutions. We investigate the applicability of active learning to concept prerequisite learning. We propose a novel set of features tailored for prerequisite classification and compare the effectiveness of four widely used query strategies. Experimental results for domains including data mining, geometry, physics, and precalculus show that active learning can be used to reduce the amount of training data required. Given the proposed features, the query-by-committee strategy outperforms other compared query strategies.",
"title": ""
},
{
"docid": "589da022358bee9f14b337db42536067",
"text": "To represent a text as a bag of properly identified “phrases” and use the representation for processing the text is proved to be useful. The key question here is how to identify the phrases and represent them. The traditional method of utilizing n-grams can be regarded as an approximation of the approach. Such a method can suffer from data sparsity, however, particularly when the length of n-gram is large. In this paper, we propose a new method of learning and utilizing task-specific distributed representations of n-grams, referred to as “region embeddings”. Without loss of generality we address text classification. We specifically propose two models for region embeddings. In our models, the representation of a word has two parts, the embedding of the word itself, and a weighting matrix to interact with the local context, referred to as local context unit. The region embeddings are learned and used in the classification task, as parameters of the neural network classifier. Experimental results show that our proposed method outperforms existing methods in text classification on several benchmark datasets. The results also indicate that our method can indeed capture the salient phrasal expressions in the texts.",
"title": ""
},
{
"docid": "25972b941fa2bb5f338cf9094edec35c",
"text": "Adynamic bone disease (ABD) is a well-recognized clinical entity in the complex chronic kidney disease (CKD)-mineral and bone disorder. Although the combination of low intact parathyroid hormone (PTH) and low bone alkaline phosphatase levels may be suggestive of ABD, the gold standard for precise diagnosis is histomorphometric analysis of tetracycline double-labeled bone biopsies. ABD essentially is characterized by low bone turnover, low bone volume, normal mineralization, and markedly decreased cellularity with minimal or no fibrosis. ABD is increasing in prevalence relative to other forms of renal osteodystrophy, and is becoming the most frequent type of bone lesion in some series. ABD develops in situations with reduced osteoanabolic stimulation caused by oversuppression of PTH, multifactorial skeletal resistance to PTH actions in uremia, and/or dysregulation of Wnt signaling. All may contribute not only to bone disease but also to the early vascular calcification processes observed in CKD. Various risk factors have been linked to ABD, including calcium loading, ageing, diabetes, hypogonadism, parathyroidectomy, peritoneal dialysis, and antiresorptive therapies, among others. The relationship between low PTH level, ABD, increased risk fracture, and vascular calcifications may at least partially explain the association of ABD with increased mortality rates. To achieve optimal bone and cardiovascular health, attention should be focused not only on classic control of secondary hyperparathyroidism but also on prevention of ABD, especially in the steadily growing proportions of diabetic, white, and elderly patients. Overcoming the insufficient osteoanabolic stimulation in ABD is the ultimate treatment goal.",
"title": ""
},
{
"docid": "31328dfaa5142cfe3d1394febdc24119",
"text": "When developing new products, it is important to understand customer perception towards consumer products. It is because the success of new products is heavily dependent on the associated customer satisfaction level. If customers are satisfied with a new product, the chance of the product being successful in marketplaces would be higher. Various approaches have been attempted to model the relationship between customer satisfaction and design attributes of products. In this paper, a particle swarm optimization (PSO) based ANFIS approach to modeling customer satisfaction is proposed for improving the modeling accuracy. In the approach, PSO is employed to determine the parameters of an ANFIS from which better customer satisfaction models in terms of modeling accuracy can be generated. A notebook article swarm optimization NFIS computer design is used as an example to illustrate the approach. To evaluate the effectiveness of the proposed approach, modeling results based on the proposed approach are compared with those based on the fuzzy regression (FR), ANFIS and genetic algorithm (GA)-based ANFIS approaches. The comparisons indicate that the proposed approach can effectively generate customer satisfaction models and that their modeling results outperform those based on the other three methods in terms of mean absolute errors and variance of errors.",
"title": ""
},
{
"docid": "1ed4870aa4c75394938f8084cec30e8f",
"text": "The high voltage gain converter is widely employed in renewable energy conversions such as photovoltaic and fuel cell power generation systems. An interleaved high step-up DC-DC converter with coupled inductor and voltage multiplier cell is proposed in this paper. The voltage multiplier cell is composed of two diodes, two capacitors and two coupled inductors. During the switch-on interval in each phase, the corresponding capacitors are charged in parallel by the coupled inductors. Similarly, during the switch-off interval in the same phase, the corresponding capacitors are discharged in series to pump their energy to the load. In this way the proposed converter can achieve high voltage conversion ratio. Other features of the proposed converter are low voltage stress across the main switches due to clamp capacitors, lower voltage stress across diodes compared to the conventional converters, low input current ripple due to interleaved structure, and alleviation of the diodes reverse recovery problems. Switches with lower RDS-ON can be used that decreases conduction losses. The principle operation of the proposed converter is given by detailed mathematical analysis. Performance of the proposed converter is validated by simulation results in PSCAD-EMTDC.",
"title": ""
},
{
"docid": "c2b1dd2d2dd1835ed77cf6d43044eed8",
"text": "The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the handengineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.",
"title": ""
},
{
"docid": "6f00b925e8330fe3d673d8ed9fd646bb",
"text": "There is an increasing amount of evidence that during mental fatigue, shifts in motivation drive performance rather than reductions in finite mental energy. So far, studies that investigated such an approach have mainly focused on cognitive indicators of task engagement that were measured during controlled tasks, offering limited to no alternative stimuli. Therefore it remained unclear whether during fatigue, attention is diverted to stimuli that are unrelated to the task, or whether fatigued individuals still focused on the task but were unable to use their cognitive resources efficiently. With a combination of subjective, EEG, pupil, eye-tracking, and performance measures the present study investigated the influence of mental fatigue on a cognitive task which also contained alternative task-unrelated stimuli. With increasing time-on-task, task engagement and performance decreased, but there was no significant decrease in gaze toward the task-related stimuli. After increasing the task rewards, irrelevant rewarding stimuli where largely ignored, and task engagement and performance were restored, even though participants still reported to be highly fatigued. Overall, these findings support an explanation of less efficient processing of the task that is influenced by motivational cost/reward tradeoffs, rather than a depletion of a finite mental energy resource. (PsycINFO Database Record",
"title": ""
},
{
"docid": "338a8efaaf4a790b508705f1f88872b2",
"text": "During the past several years, fuzzy control has emerged as one of the most active and fruitful areas for research in the applications of fuzzy set theory, especially in the realm of industrial processes, which do not lend themselves to control by conventional methods because of a lack of quantitative data regarding the input-output relations. Fuzzy control is based on fuzzy logic-a logical system that is much closer in spirit to human thinking and natural language than traditional logical systems. The fuzzy logic controller (FLC) based on fuzzy logic provides a means of converting a linguistic control strategy based on expert knowledge into an automatic control strategy. A survey of the FLC is presented ; a general methodology for constructing an FLC and assessing its performance is described; and problems that need further research are pointed out. In particular, the exposition includes a discussion of fuzzification and defuzzification strategies, the derivation of the database and fuzzy control rules, the definition of fuzzy implication, and an analysis of fuzzy reasoning mechanisms. A may be regarded as a means of emulating a skilled human operator. More generally, the use of an FLC may be viewed as still another step in the direction of model-ing human decisionmaking within the conceptual framework of fuzzy logic and approximate reasoning. In this context, the forward data-driven inference (generalized modus ponens) plays an especially important role. In what follows, we shall investigate fuzzy implication functions, the sentence connectives and and also, compositional operators, inference mechanisms, and other concepts that are closely related to the decisionmaking logic of an FLC. In general, a fuzzy control rule is a fuzzy relation which is expressed as a fuzzy implication. In fuzzy logic, there are many ways in which a fuzzy implication may be defined. The definition of a fuzzy implication may be expressed as a fuzzy implication function. The choice of a fuzzy implication function reflects not only the intuitive criteria for implication but also the effect of connective also. I) Basic Properties of a Fuuy Implication Function: The choice of a fuzzy implication function involves a number of criteria, which are discussed in considered the following basic characteristics of a fuzzy implication function: fundamental property, smoothness property, unrestricted inference, symmetry of generalized modus ponens and generalized modus tollens, and a measure of propagation of fuzziness. All of these properties are justified on purely intuitive grounds. We prefer to say …",
"title": ""
},
{
"docid": "de6de62ab783eb1b0a9347a6fa8dcacb",
"text": "The human face is among the most significant objects in an image or video, it contains many important information and specifications, also is required to be the cause of almost all achievable look variants caused by changes in scale, location, orientation, pose, facial expression, lighting conditions and partial occlusions. It plays a key role in face recognition systems and many other face analysis applications. We focus on the feature based approach because it gave great results on detect the human face. Face feature detection techniques can be mainly divided into two kinds of approaches are Feature base and image base approach. Feature base approach tries to extract features and match it against the knowledge of the facial features. This paper gives the idea about challenging problems in the field of human face analysis and as such, as it has achieved a great attention over the last few years because of its many applications in various domains. Furthermore, several existing face detection approaches are analyzed and discussed and attempt to give the issues regarding key technologies of feature base methods, we had gone direct comparisons of the method's performance are made where possible and the advantages/ disadvantages of different approaches are discussed.",
"title": ""
},
{
"docid": "f0ca75d480ca80ab9c3f8ea35819d064",
"text": "Purpose – The purpose of this paper is to evaluate the influence of psychological hardiness, social judgment, and “Big Five” personality dimensions on leader performance in U.S. military academy cadets at West Point. Design/methodology/approach – Army Cadets were studied in two different organizational contexts: (a)summer field training, and (b)during academic semesters. Leader performance was measured with leadership grades (supervisor ratings) aggregated over four years at West Point. Findings After controlling for general intellectual abilities, hierarchical regression results showed leader performance in the summer field training environment is predicted by Big Five Extraversion, and Hardiness, and a trend for Social Judgment. During the academic period context, leader performance is predicted by mental abilities, Big Five Conscientiousness, and Hardiness, with a trend for Social Judgment. Research limitations/implications Results confirm the importance of psychological hardiness, extraversion, and conscientiousness as factors influencing leader effectiveness, and suggest that social judgment aspects of emotional intelligence can also be important. These results also show that different Big Five personality factors may influence leadership in different organizational",
"title": ""
},
{
"docid": "8e8ed9826c1d0e767eced89259cf5d1e",
"text": "Forensic investigators should acquire and analyze large amount of digital evidence and submit to the court the technical truth about facts in virtual worlds. Since digital evidence is complex, diffuse, volatile and can be accidentally or improperly modified after acquired, the chain of custody must ensure that collected evidence can be accepted as truthful by the court. In this scenario, traditional paper-based chain of custody is inefficient and cannot guarantee that the forensic processes follow legal and technical principles in an electronic society. Computer forensics practitioners use forensic software to acquire copies or images from electronic devices and register associated metadata, like computer hard disk serial number and practitioner name. Usually, chain of custody software and data are insufficient to guarantee to the court the quality of forensic images, or guarantee that only the right person had access to the evidence or even guarantee that copies and analysis only were made by authorized manipulations and in the acceptable addresses. Recent developments in forensic software make possible to collect in multiple locations and analysis in distributed environments. In this work we propose the use of the new network facilities existing in Advanced Forensic Format (AFF), an open and extensible format designed for forensic tolls, to increase the quality of electronic chain of custody.",
"title": ""
},
{
"docid": "c2fb2e46eea33dcf9ec1872de5d57272",
"text": "Computational Drug Discovery, which uses computational techniques to facilitate and improve the drug discovery process, has aroused considerable interests in recent years. Drug Repositioning (DR) and DrugDrug Interaction (DDI) prediction are two key problems in drug discovery and many computational techniques have been proposed for them in the last decade. Although these two problems have mostly been researched separately in the past, both DR and DDI can be formulated as the problem of detecting positive interactions between data entities (DR is between drug and disease, and DDI is between pairwise drugs). The challenge in both problems is that we can only observe a very small portion of positive interactions. In this paper, we propose a novel framework called Dyadic PositiveUnlabeled learning (DyPU) to solve the problem of detecting positive interactions. DyPU forces positive data pairs to rank higher than the average score of unlabeled data pairs. Moreover, we also derive the dual formulation of the proposed method with the rectifier scoring function and we show that the associated non-trivial proximal operator admits a closed form solution. Extensive experiments are conducted on real drug data sets and the results show that our method achieves superior performance comparing with the state-of-the-art.",
"title": ""
},
{
"docid": "6fe71d8d45fa940f1a621bfb5b4e14cd",
"text": "We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources. Attract-Repel facilitates the use of constraints from mono- and cross-lingual resources, yielding semantically specialized cross-lingual vector spaces. Our evaluation shows that the method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones. The effectiveness of our approach is demonstrated with state-of-the-art results on semantic similarity datasets in six languages. We next show that Attract-Repel-specialized vectors boost performance in the downstream task of dialogue state tracking (DST) across multiple languages. Finally, we show that cross-lingual vector spaces produced by our algorithm facilitate the training of multilingual DST models, which brings further performance improvements.",
"title": ""
},
{
"docid": "056b463607e21e9a0183a49627245d28",
"text": "A novel and robust automated docking method that predicts the bound conformations of flexible ligands to macromolecular targets has been developed and tested, in combination with a new scoring function that estimates the free energy change upon binding. Interestingly, this method applies a Lamarckian model of genetics, in which environmental adaptations of an individual’s phenotype are reverse transcribed into its genotype and become Ž . heritable traits sic . We consider three search methods, Monte Carlo simulated annealing, a traditional genetic algorithm, and the Lamarckian genetic algorithm, and compare their performance in dockings of seven protein�ligand test systems having known three-dimensional structure. We show that both the traditional and Lamarckian genetic algorithms can handle ligands with more degrees of freedom than the simulated annealing method used in earlier versions of AUTODOCK, and that the Lamarckian genetic algorithm is the most efficient, reliable, and successful of the three. The empirical free energy function was calibrated using a set of 30 structurally known protein�ligand complexes with experimentally determined binding constants. Linear regression analysis of the observed binding constants in terms of a wide variety of structure-derived molecular properties was performed. The final model had a residual standard �1 Ž �1 . error of 9.11 kJ mol 2.177 kcal mol and was chosen as the new energy Correspondence to: A. J. Olson; e-mail: olson@scripps.edu Contract�grant sponsor: National Institutes of Health, contract�grant numbers: GM48870, RR08065 ( ) Journal of Computational Chemistry, Vol. 19, No. 14, 1639�1662 1998 � 1998 John Wiley & Sons, Inc. CCC 0192-8651 / 98 / 141639-24",
"title": ""
},
{
"docid": "3c3bf9455bd5fef1b5649f50f020f564",
"text": "There is a need for reliable lighting design applications because available tools are limited and inappropriate for interactive or creative use. Architects and lighting designers need those applications to define, predict, test and validate lighting solutions for their problems. We present a new approach to the lighting design problem based on a methodology that includes the geometry of the scene, the properties of materials and the design goals. It is possible to obtain luminaire characteristics or other kind of results that maximise the attainment of the design goals, which may include different types of constraints or objectives (lighting, geometrical or others). The main goal, in our approach, is to improve the lighting design cycle. In this work we discuss the use of optimisation in lighting design, describe the implementation of the methodology, present real-world based examples and analyse in detail some of the complex technical problems associated and speculate on how to overcome them.",
"title": ""
},
{
"docid": "fbf30d2032b0695b5ab2d65db2fe8cbc",
"text": "Artificial Intelligence for computer games is an interesting topic which attracts intensive attention recently. In this context, Mario AI Competition modifies a Super Mario Bros game to be a benchmark software for people who program AI controller to direct Mario and make him overcome the different levels. This competition was handled in the IEEE Games Innovation Conference and the IEEE Symposium on Computational Intelligence and Games since 2009. In this paper, we study the application of Reinforcement Learning to construct a Mario AI controller that learns from the complex game environment. We train the controller to grow stronger for dealing with several difficulties and types of levels. In controller developing phase, we design the states and actions cautiously to reduce the search space, and make Reinforcement Learning suitable for the requirement of online learning.",
"title": ""
},
{
"docid": "83fe89f0f70456ac6da216790b44cede",
"text": "-There are often serious discrepancy existing between the recorded image and the direct observation of the same scene. Human visual system is able to distinguish details and vivid colors in shadows and in scenes that contain illuminant shifts. In this paper it is presented an image enhancement algorithm called Multiscale Retinex with Color Restoration (MSRCR). The MSRCR algorithm tries to imitate human visual “computation” while observing scenes that contains lighting variations. MSRCR is an extension of a former algorithm called Single Scale center/surround Retinex (SSR) and its extension Multi Scale center/surround Retinex (MSR).MSRCR achieves simultaneous dynamic range compression, color consistency and lightness rendition. To reduce the computational effort the two dimensional filtering between surround function and the image function is performed in the frequency domain by finding the product of spectra of both the functions. Keywords—Retinex, Single Scale Retinex, Multi Scale Retinex, Multi Scale Retinex with Color Restoration, dynamic range compression, surrounding function. Introduction: Image enhancement improves the quality (clarity) of images for human viewing. Removing blurring and noise, increasing contrast, and revealing details are examples of image enhancement operations. Reducing the noise and blurring and increasing the contrast range could enhance the image. Image processing technology is used by planetary scientists to enhance images of Mars, Venus, or other planets. Doctors use this technology to manipulate CAT scans and MRI images. There are often serious discrepancy existing between the images and the direct observation of the real scenes. The human perception has natures of dynamic range compression and color rendition on the scenes. It can compute the details across a large range of spectral and lightness variations. So it is color constant. By comparison, the recorded films and other electric cameras, have no such computations before the scenes are recorded in the dynamic range limited media. Even with wide dynamic range imaging systems, the recorded images will not be seen same as real observation. This is because the dynamic range compression for perception of the recorded images is weaker than that for the scene itself. This could be explained as that in the real world, the wider angular extent helps to improve the dynamic range compression. The idea of Retinex was proposed as a model of lightness and color perception of the human vision. Obviously it is not only a model, but also could be developed to algorithms of image enhancement. Later single-scale Retinex (SSR) was defined as an implementation of center/surround Retinex. But depending on the special scale, it can either provide dynamic range compression (small scale) or tonal rendition (large scale). Superposition of weighted different scale SSR is obvious a choice to balance these two effects. This is multiscale Retinex (MSR). For color images, if the content is out of “gray world”, which means the spatial averages of three color bands are far from equal, the output will be forced to be gray by MSR. This problem could be solved by introducing weight factor for different channels in multiscale Retinex with color restoration (MSRCR). Image Enhancement using Retinex Algorithms: Land [1] first proposed the idea of Retinex as a model of lightness and color perception of the human vision. Obviously it is not only a model, but also could be developed to algorithms of image enhancement. 
Land made more contributions to the Retinex algorithm, evolving the concept from a random walk computation to his latest version of a center/surround spatially opponent operation. The center/surround opponent operation is related to the neurophysiological functions of neurons in the primate retina, lateral geniculate nucleus, and cerebral cortex. Hurlbert [2] studied the lightness theories and found that they have a common mathematical foundation. The learning problems for artificial neural networks also suggested a solution with a center/surround form. But that is not enough: human vision does not determine the relative reflectances, but rather content-dependent relative reflectances for arbitrary illumination conditions. Researchers defined a single-scale Retinex (SSR), which is an implementation of the center/surround Retinex. Depending on the chosen scale, it can provide either dynamic range compression (small scale) or tonal rendition (large scale). Superposition of weighted SSRs at different scales is an obvious choice to balance these two effects; this is the multiscale Retinex (MSR). For color images, if the content violates the “gray world” assumption, which means the spatial averages of the three color bands are far from equal, the output will be forced to gray by MSR. This problem can be solved by introducing a weight factor for different channels in the multiscale Retinex with color restoration (MSRCR). After MSRCR, generally the outputs will be out of",
"title": ""
},
{
"docid": "2d829db0d8781f4d80f2402e9c733c82",
"text": "We describe an update of the miRBase database (http://www.mirbase.org/), the primary microRNA sequence repository. The latest miRBase release (v20, June 2013) contains 24 521 microRNA loci from 206 species, processed to produce 30 424 mature microRNA products. The rate of deposition of novel microRNAs and the number of researchers involved in their discovery continue to increase, driven largely by small RNA deep sequencing experiments. In the face of these increases, and a range of microRNA annotation methods and criteria, maintaining the quality of the microRNA sequence data set is a significant challenge. Here, we describe recent developments of the miRBase database to address this issue. In particular, we describe the collation and use of deep sequencing data sets to assign levels of confidence to miRBase entries. We now provide a high confidence subset of miRBase entries, based on the pattern of mapped reads. The high confidence microRNA data set is available alongside the complete microRNA collection at http://www.mirbase.org/. We also describe embedding microRNA-specific Wikipedia pages on the miRBase website to encourage the microRNA community to contribute and share textual and functional information.",
"title": ""
}
] |
scidocsrr
|
1638fc528fd71846a21e44045d560070
|
Relationship between attachment styles and happiness in medical students
|
[
{
"docid": "e89cf17cf4d336468f75173767af63a5",
"text": "This article explores the possibility that romantic love is an attachment process--a biosocial process by which affectional bonds are formed between adult lovers, just as affectional bonds are formed earlier in life between human infants and their parents. Key components of attachment theory, developed by Bowlby, Ainsworth, and others to explain the development of affectional bonds in infancy, were translated into terms appropriate to adult romantic love. The translation centered on the three major styles of attachment in infancy--secure, avoidant, and anxious/ambivalent--and on the notion that continuity of relationship style is due in part to mental models (Bowlby's \"inner working models\") of self and social life. These models, and hence a person's attachment style, are seen as determined in part by childhood relationships with parents. Two questionnaire studies indicated that relative prevalence of the three attachment styles is roughly the same in adulthood as in infancy, the three kinds of adults differ predictably in the way they experience romantic love, and attachment style is related in theoretically meaningful ways to mental models of self and social relationships and to relationship experiences with parents. Implications for theories of romantic love are discussed, as are measurement problems and other issues related to future tests of the attachment perspective.",
"title": ""
},
{
"docid": "c5beaa8be086776c769caedc30815aa8",
"text": "Three studies were conducted to examine the correlates of adult attachment. In Study 1, an 18-item scale to measure adult attachment style dimensions was developed based on Kazan and Shaver's (1987) categorical measure. Factor analyses revealed three dimensions underlying this measure: the extent to which an individual is comfortable with closeness, feels he or she can depend on others, and is anxious or fearful about such things as being abandoned or unloved. Study 2 explored the relation between these attachment dimensions and working models of self and others. Attachment dimensions were found to be related to self-esteem, expressiveness, instrumentality, trust in others, beliefs about human nature, and styles of loving. Study 3 explored the role of attachment style dimensions in three aspects of ongoing dating relationships: partner matching on attachment dimensions; similarity between the attachment of one's partner and caregiving style of one's parents; and relationship quality, including communication, trust, and satisfaction. Evidence was obtained for partner matching and for similarity between one's partner and one's parents, particularly for one's opposite-sex parent. Dimensions of attachment style were strongly related to how each partner perceived the relationship, although the dimension of attachment that best predicted quality differed for men and women. For women, the extent to which their partner was comfortable with closeness was the best predictor of relationship quality, whereas the best predictor for men was the extent to which their partner was anxious about being abandoned or unloved.",
"title": ""
}
] |
[
{
"docid": "07409cd81cc5f0178724297245039878",
"text": "In recent years, the number of sensor network deployments for real-life applications has rapidly increased and it is expected to expand even more in the near future. Actually, for a credible deployment in a real environment three properties need to be fulfilled, i.e., energy efficiency, scalability and reliability. In this paper we focus on IEEE 802.15.4 sensor networks and show that they can suffer from a serious MAC unreliability problem, also in an ideal environment where transmission errors never occur. This problem arises whenever power management is enabled - for improving the energy efficiency - and results in a very low delivery ratio, even when the number of nodes in the network is very low (e.g., 5). We carried out an extensive analysis, based on simulations and real measurements, to investigate the ultimate reasons of this problem. We found that it is caused by the default MAC parameter setting suggested by the 802.15.4 standard. We also found that, with a more appropriate parameter setting, it is possible to achieve the desired level of reliability (as well as a better energy efficiency). However, in some scenarios this is possible only by choosing parameter values formally not allowed by the standard.",
"title": ""
},
{
"docid": "1cf7185a9cff83de1a1f77e736647f77",
"text": "The ability to detect mental states, whether relaxation or stressed, would be useful in categorizing places according to their impact on our brains and many other domains. Newly available, affordable and dry-electrode devices make electroencephalography headsets (EEG) feasible to use outside the lab, for example in open spaces and shopping malls. The purpose of this pervasive experimental manipulation is to analyze brain signals in order to label outdoor places according to how users perceive them with a focus on ---relaxing and ---stressful mental states. That is, when the user is experiencing tranquil brain waves or not when visiting a particular place. This paper demonstrates the potential of exploiting the temporal structure of EEG signals in making sense of outdoor places. The EEG signals induced by the place stimuli are analyzed and exploited to distinguish what we refer to as a place signature.",
"title": ""
},
{
"docid": "29ea1cfa755ae438f989d41d85dfefaa",
"text": "Early case studies and noncontrolled trial studies focusing on the treatment of delusions and hallucinations have laid the foundation for more recent developments in comprehensive cognitive behavioral therapy (CBT) interventions for schizophrenia. Seven randomized, controlled trial studies testing the efficacy of CBT for schizophrenia were identified by electronic search (MEDLINE and PsychInfo) and by personal correspondence. After a review of these studies, effect size (ES) estimates were computed to determine the statistical magnitude of clinical change in CBT and control treatment conditions. CBT has been shown to produce large clinical effects on measures of positive and negative symptoms of schizophrenia. Patients receiving routine care and adjunctive CBT have experienced additional benefits above and beyond the gains achieved with routine care and adjunctive supportive therapy. These results reveal promise for the role of CBT in the treatment of schizophrenia although additional research is required to test its efficacy, long-term durability, and impact on relapse rates and quality of life. Clinical refinements are needed also to help those who show only minimal benefit with the intervention.",
"title": ""
},
{
"docid": "7175d7767b2fc227136863bdec145dc2",
"text": "In this letter, a tapered slot ultrawide band (UWB) Vivaldi antenna with enhanced gain having band notch characteristics in the WLAN/WiMAX band is presented. In this framework, a reference tapered slot Vivaldi antenna is first designed for UWB operation that is, 3.1–10.6 GHz using the standard procedure. The band-notch operation at 4.8 GHz is achieved with the help of especially designed complementary split ring resonator (CSRR) cell placed near the excitation point of the antenna. Further, the gain of the designed antenna is enhanced substantially with the help of anisotropic zero index metamaterial (AZIM) cells, which are optimized and positioned on the substrate in a particular fashion. In order to check the novelty of the design procedure, three distinct Vivaldi structures are fabricated and tested. Experimental data show quite good agreement with the simulated results. As the proposed antenna can minimize the electromagnetic interference (EMI) caused by the IEEE 802.11 WLAN/WiMAX standards, it can be used more efficiently in the UWB frequency band. VC 2016 Wiley Periodicals, Inc. Microwave Opt Technol Lett 58:233–238, 2016; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.29534",
"title": ""
},
{
"docid": "754c7cd279c8f3c1a309071b8445d6fa",
"text": "We present a framework for describing insiders and their actions based on the organization, the environment, the system, and the individual. Using several real examples of unwelcome insider action (hard drive removal, stolen intellectual property, tax fraud, and proliferation of e-mail responses), we show how the taxonomy helps in understanding how each situation arose and could have been addressed. The differentiation among types of threats suggests how effective responses to insider threats might be shaped, what choices exist for each type of threat, and the implications of each. Future work will consider appropriate strategies to address each type of insider threat in terms of detection, prevention, mitigation, remediation, and punishment.",
"title": ""
},
{
"docid": "b0087e2afdf5a1abc5046782279529a5",
"text": "The rapid development of Community Question Answering (CQA) satisfies users’ quest for professional and personal knowledge about anything. In CQA, one central issue is to find users with expertise and willingness to answer the given questions. Expert finding in CQA often exhibits very different challenges compared to traditional methods. The new features of CQA (such as huge volume, sparse data and crowdsourcing) violate fundamental assumptions of traditional recommendation systems. This paper focuses on reviewing and categorizing the current progress on expert finding in CQA. We classify the recent solutions into four different categories: matrix factorization based models (MF-based models), gradient boosting tree based models (GBT-based models), deep learning based models (DL-based models) and ranking based models (R-based models). We find that MF-based models outperform other categories of models in the crowdsourcing situation. Moreover, we use innovative diagrams to clarify several important concepts of ensemble learning, and find that ensemble models with several specific single models can further boost the performance. Further, we compare the performance of different models on different types of matching tasks, including text vs. text, graph vs. text, audio vs. text and video vs. text. The results will help the model selection of expert finding in practice. Finally, we explore some potential future issues in expert finding research in CQA.",
"title": ""
},
{
"docid": "71aae4cbccf6d3451d35528ceca8b8a9",
"text": "We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments, and also preserves their hierarchical and temporal relationships. Using simple linear SVM on the resultant bag of hierarchical space-time segments representation, we attain better than, or comparable to, state-of-the-art action recognition performance on two challenging benchmark datasets and at the same time produce good action localization results.",
"title": ""
},
{
"docid": "f881936c1d9edaaf0cdcce10965ac034",
"text": "The bacterial CRISPR system is fast becoming the most popular genetic and epigenetic engineering tool due to its universal applicability and adaptability. The desire to deploy CRISPR-based methods in a large variety of species and contexts has created an urgent need for the development of easy, time- and cost-effective methods enabling large-scale screening approaches. Here we describe CORALINA (comprehensive gRNA library generation through controlled nuclease activity), a method for the generation of comprehensive gRNA libraries for CRISPR-based screens. CORALINA gRNA libraries can be derived from any source of DNA without the need of complex oligonucleotide synthesis. We show the utility of CORALINA for human and mouse genomic DNA, its reproducibility in covering the most relevant genomic features including regulatory, coding and non-coding sequences and confirm the functionality of CORALINA generated gRNAs. The simplicity and cost-effectiveness make CORALINA suitable for any experimental system. The unprecedented sequence complexities obtainable with CORALINA libraries are a necessary pre-requisite for less biased large scale genomic and epigenomic screens.",
"title": ""
},
{
"docid": "bc70137062d6e9739b0956e806fb85c9",
"text": "Energy disaggregation or NILM is the best solution to reduce our consumption of electricity. Many algorithms in machine learning are applied to this field. However, the classification results from those algorithms are not as well as expected. In this paper, we propose a new approach to construct a classifier for energy disaggregation with deep learning field. We apply Gated Recurrent Unit (GRU) based on Recurrent Neural Network (RNN) to train our model using UK DALE dataset on this field. Besides, we compare our approach to original RNN on energy disaggregation. By applying GRU RRN, we achieve accuracy and F-measure for energy disaggregation with the ranges [89%–98%] and [81%–98%] respectively. Through these results of the experiment, we confirm that the deep learning approach is really effective for NILM.",
"title": ""
},
{
"docid": "f3c60d98d521ac7853bde863808e8930",
"text": "In recent years cybersecurity has gained prominence as a field of expertise and the relevant practical skills are in high demand. To reduce the cost and amount of dedicated hardware required to set up a cybersecurity lab to teach those skills, several virtualization and outsourcing approaches were developed but the resulting setup has often increased in total complexity, hampering adoption. In this paper we present a very simple (and therefore highly scalable) setup that incorporates state-of-the-art industry tools. We also describe a structured set of lab assignments developed for this setup that build one on top of the other to cover the material of a semester-long Cybersecurity course taught at Boston University. We explore alternative lab architectures, discuss other existing sets of lab assignments and present some ideas for further improvement.",
"title": ""
},
{
"docid": "2fbe9db6c676dd64c95e72e8990c63f0",
"text": "Community detection is one of the most important problems in the field of complex networks in recent years. Themajority of present algorithms only find disjoint communities, however, community often overlap to some extent in many real-world networks. In this paper, an improvedmulti-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral-clustering is proposed to detect the overlapping community structure in complex networks. Firstly, the line graph of the graph modeling the network is formed, and a spectral method is employed to extract the spectral information of the line graph. Secondly, IMOQPSO is employed to solve the multi-objective optimization problem so as to resolve the separated community structure in the line graph which corresponding to the overlapping community structure in the graph presenting the network. Finally, a fine-tuning strategy is adopted to improve the accuracy of community detection. The experiments on both synthetic and real-world networks demonstrate our method achieves cover results which fit the real situation in an even better fashion.",
"title": ""
},
{
"docid": "0949f7e3f4f1c8c1d1ff1d5b56ae8ce4",
"text": "Advancement in information and communication technology (ICT) has given rise to explosion of data in every field of operations. Working with the enormous volume of data (or Big Data, as it is popularly known as) for extraction of useful information to support decision making is one of the sources of competitive advantage for organizations today. Enterprises are leveraging the power of analytics in formulating business strategy in every facet of their operations to mitigate business risk. Volatile global market scenario has compelled the organizations to redefine their supply chain management (SCM). In this paper, we have delineated the relevance of Big Data and its importance in managing end to end supply chains for achieving business excellence. A Big Data-centric architecture for SCM has been proposed that exploits the current state of the art technology of data management, analytics and visualization. The security and privacy requirements of a Big Data system have also been highlighted and several mechanisms have been discussed to implement these features in a real world Big Data system deployment in the context of SCM. Some future scope of work has also been pointed out. Keyword: Big Data, Analytics, Cloud, Architecture, Protocols, Supply Chain Management, Security, Privacy.",
"title": ""
},
{
"docid": "9cf470291ddde91679d8250797a740d2",
"text": "Decentralized blockchains offer attractive advantages over traditional payments such as the ability to operate without a trusted authority and increased user privacy. However, the verification of blockchain payments requires the user to download and process the entire chain which can be infeasible for resource-constrained devices, such as mobile phones. To address such concerns, most major blockchain systems support lightweight clients that outsource most of the computational and storage burden to full blockchain nodes. However, such payment verification methods leak considerable information about the underlying clients, thus defeating user privacy that is considered one of the main goals of decentralized cryptocurrencies. In this paper, we propose a new approach to protect the privacy of lightweight clients in blockchain systems like Bitcoin. Our main idea is to leverage commonly available trusted execution capabilities, such as SGX enclaves. We design and implement a system called Bite where enclaves on full nodes serve privacy-preserving requests from lightweight clients. As we will show, naive serving of client requests from within SGX enclaves still leaks user information. Bite therefore integrates several privacy preservation measures that address external leakage as well as SGX side-channels. We show that the resulting solution provides strong privacy protection and at the same time improves the performance of current lightweight clients.",
"title": ""
},
{
"docid": "e98a987fce667f1bb0123448f1b08ce4",
"text": "Commonly, HoG/SVM classifier uses rectangular images for HoG feature descriptor extraction and training. This means that significant additional work has to be done to process irrelevant pixels belonging to the background surrounding the object of interest. Moreover, some areas of the foreground also can be eliminated from the processing to improve the algorithm speed and memory wise. In Boundary-Bitmap HoG approach proposed in this paper, the boundary of irregular shape of the object is represented by a bitmap to avoid processing of extra background and (partially) foreground pixels. Bitmap, derived from the training dataset, encodes those portions of an image to be used to train a classifier. Experimental results show that not only the proposed algorithm decreases the workload associated with HoG/SVM classifiers by 92.5% compared to the state-of-the-art, but also it shows an average increase about 6% in recall and a decrease about 3% in precision in comparison with standard HoG.",
"title": ""
},
{
"docid": "cd13c8d9b950c35c73aeaadd2cfa1efb",
"text": "The significant worldwide increase in observed river runoff has been tentatively attributed to the stomatal \"antitranspirant\" response of plants to rising atmospheric CO(2) [Gedney N, Cox PM, Betts RA, Boucher O, Huntingford C, Stott PA (2006) Nature 439: 835-838]. However, CO(2) also is a plant fertilizer. When allowing for the increase in foliage area that results from increasing atmospheric CO(2) levels in a global vegetation model, we find a decrease in global runoff from 1901 to 1999. This finding highlights the importance of vegetation structure feedback on the water balance of the land surface. Therefore, the elevated atmospheric CO(2) concentration does not explain the estimated increase in global runoff over the last century. In contrast, we find that changes in mean climate, as well as its variability, do contribute to the global runoff increase. Using historic land-use data, we show that land-use change plays an additional important role in controlling regional runoff values, particularly in the tropics. Land-use change has been strongest in tropical regions, and its contribution is substantially larger than that of climate change. On average, land-use change has increased global runoff by 0.08 mm/year(2) and accounts for approximately 50% of the reconstructed global runoff trend over the last century. Therefore, we emphasize the importance of land-cover change in forecasting future freshwater availability and climate.",
"title": ""
},
{
"docid": "9e439c83f4c29b870b1716ceae5aa1f3",
"text": "Suspension system plays an imperative role in retaining the continuous road wheel contact for better road holding. In this paper, fuzzy self-tuning of PID controller is designed to control of active suspension system for quarter car model. A fuzzy self-tuning is used to develop the optimal control gain for PID controller (proportional, integral, and derivative gains) to minimize suspension working space of the sprung mass and its change rate to achieve the best comfort of the driver. The results of active suspension system with fuzzy self-tuning PID controller are presented graphically and comparisons with the PID and passive system. It is found that, the effectiveness of using fuzzy self-tuning appears in the ability to tune the gain parameters of PID controller",
"title": ""
},
{
"docid": "159e040b0e74ad1b6124907c28e53daf",
"text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. 
In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. 
This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling sensor when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to ",
"title": ""
},
{
"docid": "b55a314aea8914db8705cd3974c862bb",
"text": "This study examines the mediating effect of perceived usefulness on the relationship between tax service quality (correctness, response time, system support) and continuance usage intention of e-filing system in Malaysia. A total of 116 data was analysed using Partial Least Squared Method (PLS). The result showed that Perceived Usefulness has a partial mediating effect on the relationship between tax service quality (Correctness, Response Time) with the continuance usage intention and tax service quality (correctness) has significant positive relationship with continuance usage intention. Perceived usefulness was found to be the most important predictor of continuance usage intention.",
"title": ""
},
{
"docid": "7be3d69a599d39042eafbb3dc28d5b18",
"text": "The increasing pipeline depth, aggressive clock rates and execution width of modern processors require ever more accurate dynamic branch predictors to fully exploit their potential. Recent research on ahead pipelined branch predictors [11, 19] and branch predictors based on perceptrons [10, 11] have offered either increased accuracy or effective single cycle access times, at the cost of large hardware budgets and additional complexity in the branch predictor recovery mechanism. Here we show that a pipelined perceptron predictor can be constructed so that it has an effective latency of one cycle with a minimal loss of accuracy. We then introduce the concept of a precomputed local perceptron, which allows the use of both local and global history in an ahead pipelined perceptron. Both of these two techniques together allow this new perceptron predictor to match or exceed the accuracy of previous designs except at very small hardware budgets, and allow the elimination of most of the complexity in the rest of the pipeline associated with overriding predictors.",
"title": ""
},
{
"docid": "65e3890edd57a0a6de65b4e38f3cea1c",
"text": "This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an `1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of `1-analysis for such problems.",
"title": ""
}
] |
scidocsrr
|
a55072ca1513b4d10a0da94bb461ce10
|
Brain Tumor Detection Using Image Processing
|
[
{
"docid": "4bbb2191088155c823bc152fce0dec89",
"text": "Image Segmentation is an important and challenging factor in the field of medical sciences. It is widely used for the detection of tumours. This paper deals with detection of brain tumour from MR images of the brain. The brain is the anterior most part of the nervous system. Tumour is a rapid uncontrolled growth of cells. Magnetic Resonance Imaging (MRI) is the device required to diagnose brain tumour. The normal MR images are not that suitable for fine analysis, so segmentation is an important process required for efficiently analyzing the tumour images. Clustering is suitable for biomedical image segmentation as it uses unsupervised learning. This paper work uses K-Means clustering where the detected tumour shows some abnormality which is then rectified by the use of morphological operators along with basic image processing techniques to meet the goal of separating the tumour cells from the normal cells.",
"title": ""
},
{
"docid": "1bdfcf7f162bfc8c8c51a153fd4ea437",
"text": "In this paper, modified image segmentation techniques were applied on MRI scan images in order to detect brain tumors. Also in this paper, a modified Probabilistic Neural Network (PNN) model that is based on learning vector quantization (LVQ) with image and data analysis and manipulation techniques is proposed to carry out an automated brain tumor classification using MRI-scans. The assessment of the modified PNN classifier performance is measured in terms of the training performance, classification accuracies and computational time. The simulation results showed that the modified PNN gives rapid and accurate classification compared with the image processing and published conventional PNN techniques. Simulation results also showed that the proposed system out performs the corresponding PNN system presented in [30], and successfully handle the process of brain tumor classification in MRI image with 100% accuracy when the spread value is equal to 1. These results also claim that the proposed LVQ-based PNN system decreases the processing time to approximately 79% compared with the conventional PNN which makes it very promising in the field of in-vivo brain tumor detection and identification. Keywords— Probabilistic Neural Network, Edge detection, image segmentation, brain tumor detection and identification",
"title": ""
}
] |
[
{
"docid": "504776b83a292b320aaf0d0b02947d02",
"text": "The combination of unique single nucleotide polymorphisms in the CCR5 regulatory and in the CCR2 and CCR5 coding regions, defined nine CCR5 human haplogroups (HH): HHA-HHE, HHF*1, HHF*2, HHG*1, and HHG*2. Here we examined the distribution of CCR5 HH and their association with HIV infection and disease progression in 36 HIV-seronegative and 76 HIV-seropositive whites from North America and Spain [28 rapid progressors (RP) and 48 slow progressors (SP)]. Although analyses revealed that HHE frequencies were similar between HIV-seronegative and HIV-seropositive groups (25.0% vs. 32.2%, p > 0.05), HHE frequency in RP was significantly higher than that in SP (48.2% vs. 22.9%, p = 0.002). Survival analysis also showed that HHE heterozygous and homozygous were associated with an accelerated CD4 cell count decline to less than 200 cells/microL (adjusted RH 2.44, p = 0.045; adjusted RH = 3.12, p = 0.037, respectively). These data provide further evidence that CCR5 human haplogroups influence HIV-1 disease progression in HIV-infected persons.",
"title": ""
},
{
"docid": "3296ab591724b59a808ce2f43d9320ef",
"text": "We present a novel method for removing rain streaks from a single input image by decomposing it into a rain-free background layer B and a rain-streak layer R. A joint optimization process is used that alternates between removing rain-streak details from B and removing non-streak details from R. The process is assisted by three novel image priors. Observing that rain streaks typically span a narrow range of directions, we first analyze the local gradient statistics in the rain image to identify image regions that are dominated by rain streaks. From these regions, we estimate the dominant rain streak direction and extract a collection of rain-dominated patches. Next, we define two priors on the background layer B, one based on a centralized sparse representation and another based on the estimated rain direction. A third prior is defined on the rain-streak layer R, based on similarity of patches to the extracted rain patches. Both visual and quantitative comparisons demonstrate that our method outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "c337226d663e69ecde67ff6f35ba7654",
"text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.",
"title": ""
},
{
"docid": "4aa0f3a526c1ca44ab84ebd2e8fc4dc6",
"text": "Blockchain is so far well-known for its potential applications in financial and banking sectors. However, blockchain as a decentralized and distributed technology can be utilized as a powerful tool for immense daily life applications. Healthcare is one of the prominent applications area among others where blockchain is supposed to make a strong impact. It is generating wide range of opportunities and possibilities in current healthcare systems. Therefore, this paper is all about exploring the potential applications of blockchain technology in current healthcare systems and highlights the most important requirements to fulfill the need of such systems such as trustless and transparent healthcare systems. In addition, this work also presents the challenges and obstacles needed to resolve before the successful adoption of blockchain technology in healthcare systems. Furthermore, we introduce the smart contract for blockchain based healthcare systems which is key for defining the pre-defined agreements among various involved stakeholders.",
"title": ""
},
{
"docid": "bd5b8680feac7b5ff806a6a40b9f73ae",
"text": "Human variation in content selection in summarization has given rise to some fundamental research questions: How can one incorporate the observed variation in suitable evaluation measures? How can such measures reflect the fact that summaries conveying different content can be equally good and informative? In this article, we address these very questions by proposing a method for analysis of multiple human abstracts into semantic content units. Such analysis allows us not only to quantify human variation in content selection, but also to assign empirical importance weight to different content units. It serves as the basis for an evaluation method, the Pyramid Method, that incorporates the observed variation and is predictive of different equally informative summaries. We discuss the reliability of content unit annotation, the properties of Pyramid scores, and their correlation with other evaluation methods.",
"title": ""
},
{
"docid": "b6e15d3931080de9a8f92d5b6e4c19e0",
"text": "A low-profile, electrically small antenna with omnidirectional vertically polarized radiation similar to a short monopole antenna is presented. The antenna features less than lambda/40 dimension in height and lambda/10 or smaller in lateral dimension. The antenna is matched to a 50 Omega coaxial line without the need for external matching. The geometry of the antenna is derived from a quarter-wave transmission line resonator fed at an appropriate location to maximize current through the short-circuited end. To improve radiation from the vertical short-circuited pin, the geometry is further modified through superposition of additional resonators placed in a parallel arrangement. The lateral dimension of the antenna is miniaturized by meandering and turning the microstrip lines into form of a multi-arm spiral. The meandering between the short-circuited end and the feed point also facilitates the impedance matching. Through this technique, spurious horizontally polarized radiation is also minimized and a radiation pattern similar to a short dipole is achieved. The antenna is designed, fabricated and measured. Parametric studies are performed to explore further size reduction and performance improvements. Based on the studies, a dual-band antenna with enhanced gain is realized. The measurements verify that the proposed fabricated antennas feature excellent impedance match, omnidirectional radiation in the horizontal plane and low levels of cross-polarization.",
"title": ""
},
{
"docid": "741f73818da4399924daac8e96ded51c",
"text": "Purpose – The purpose of this paper is to look at how knowledge management (KM) has entered into a new phase where consolidation and harmonisation of concepts is required. Some first standards have been published in Europe and Australia in order to foster a common understanding of terms and concepts. The aim of this study was to analyse KM frameworks from research and practice regarding their model elements and try to discover differences and correspondences. Design/methodology/approach – A total of 160 KM frameworks from science, practice, associations and standardization bodies have been collected worldwide. These frameworks have been analysed regarding the use and understanding of the term knowledge, the terms used to describe the knowledge process activities and the factors influencing the success of knowledge management. Quantitative and qualitative content analysis methods have been applied. Findings – The result shows that despite the wide range of terms used in the KM frameworks an underlying consensus was detected regarding the basic categories used to describe the knowledge management activities and the critical success factors of KM. Nevertheless regarding the core term knowledge there is still a need to develop an improved understanding in research and practice. Originality/value – The first quantitative and qualitative analysis of 160 KM frameworks from different origin worldwide.",
"title": ""
},
{
"docid": "78ce06926ea3b2012277755f0916fbb7",
"text": "We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because \"those who cannot remember the past are condemned to repeat it.\" This retrospective represents a further step forward to understanding the current state of both types of engineerings; history has also positive experiences; some of them we would like to remember and to repeat. Two types of engineerings had parallel and divergent evolutions but following a similar pattern. We also define a set of milestones that represent a convergence or divergence of the software development methodologies. These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other one.",
"title": ""
},
{
"docid": "a85496dc96f87ba4f0018ef8bb2c8695",
"text": "The negative capacitance (NC) of ferroelectric materials has paved the way for achieving sub-60-mV/decade switching feature in complementary metal-oxide-semiconductor (CMOS) field-effect transistors, by simply inserting a ferroelectric thin layer in the gate stack. However, in order to utilize the ferroelectric capacitor (as a breakthrough technique to overcome the Boltzmann limit of the device using thermionic emission process), the thickness of the ferroelectric layer should be scaled down to sub-10-nm for ease of integration with conventional CMOS logic devices. In this paper, we demonstrate an NC fin-shaped field-effect transistor (FinFET) with a 6-nm-thick HfZrO ferroelectric capacitor. The performance parameters of NC FinFET such as on-/off-state currents and subthreshold slope are compared with those of the conventional FinFET. Furthermore, a repetitive and reliable steep switching feature of the NC FinFET at various drain voltages is demonstrated.",
"title": ""
},
{
"docid": "413c4d1115e8042cce44308583649279",
"text": "With the growing popularity of microblogging services such as Twitter in recent years, an increasing number of users are using these services in their daily lives. The huge volume of information generated by users raises new opportunities in various applications and areas. Inferring user interests plays a significant role in providing personalized recommendations on microblogging services, and also on third-party applications providing social logins via these services, especially in cold-start situations. In this survey, we review user modeling strategies with respect to inferring user interests from previous studies. To this end, we focus on four dimensions of inferring user interest profiles: (1) data collection, (2) representation of user interest profiles, (3) construction and enhancement of user interest profiles, and (4) the evaluation of the constructed profiles. Through this survey, we aim to provide an overview of state-of-the-art user modeling strategies for inferring user interest profiles on microblogging social networks with respect to the four dimensions. For each dimension, we review and summarize previous studies based on specified criteria. Finally, we discuss some challenges and opportunities for future work in this research domain.",
"title": ""
},
{
"docid": "f30e54728a10e416d61996c082197f5b",
"text": "This paper describes an efficient and straightforward methodology for OCR-ing and post-correcting Arabic text material on Islamic embryology collected for the COBHUNI project. As the target texts of the project include diverse diachronic stages of the Arabic language, the team of annotators for performing the OCR post-correction requires well-trained experts on language skills. While technical skills are also desirable, highly trained language experts typically lack enough technical knowledge. Furthermore, a relatively small portion of the target texts needed to be OCR-ed, as most of the material was already on some digital form. Thus, the OCR task could only require a small amount of resources in terms of time and work complexity. Both the low technical skills of the annotators and the resource constraints made it necessary for us to find an easy-to-develop and suitable workflow for performing the OCR and post-correction tasks. For the OCR phase, we chose Tesseract Open Source OCR Engine, because it achieves state-of-the-art levels of accuracy. For the post-correction phase, we decided to use the Proofread Page extension of the MediaWiki software, as it strikes a perfect balance between usability and efficiency. The post-correction task was additionally supported by the implementation of an error checker based on simple heuristics. The application of this methodology resulted in the successful and fast OCR-ing and post-correction of a corpus of 36,132 tokens.",
"title": ""
},
{
"docid": "b0c62e2049ea4f8ada0d506e06adb4bb",
"text": "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"title": ""
},
{
"docid": "548fb90bf9d665e57ced0547db1477b7",
"text": "In the application of face recognition, eyeglasses could significantly degrade the recognition accuracy. A feasible method is to collect large-scale face images with eyeglasses for training deep learning methods. However, it is difficult to collect the images with and without glasses of the same identity, so that it is difficult to optimize the intra-variations caused by eyeglasses. In this paper, we propose to address this problem in a virtual synthesis manner. The high-fidelity face images with eyeglasses are synthesized based on 3D face model and 3D eyeglasses. Models based on deep learning methods are then trained on the synthesized eyeglass face dataset, achieving better performance than previous ones. Experiments on the real face database validate the effectiveness of our synthesized data for improving eyeglass face recognition performance.",
"title": ""
},
{
"docid": "a7fe7068ce05260603ca697a8e5e8410",
"text": "In this paper, we will introduce our newly developed 3D simulation system for miniature unmanned aerial vehicles (UAVs) navigation and control in GPS-denied environments. As we know, simulation technologies can verify the algorithms and identify potential problems before the actual flight test and to make the physical implementation smoothly and successfully. To enhance the capability of state-of-the-art of research-oriented UAV simulation system, we develop a 3D simulator based on robot operation system (ROS) and a game engine, Unity3D. Unity3D has powerful graphics and can support high-fidelity 3D environments and sensor modeling which is important when we simulate sensing technologies in cluttered and harsh environments. On the other hand, ROS can provide clear software structure and simultaneous operation between hardware devices for actual UAVs. By developing data transmitting interface and necessary sensor modeling techniques, we have successfully glued ROS and Unity together. The integrated simulator can handle real-time multi-UAV navigation and control algorithms, including online processing of a large number of sensor data.",
"title": ""
},
{
"docid": "0165273958cc8385d371024e89f87d15",
"text": "Traditional, persistent data-oriented approaches in computer forensics face some limitations regarding a number of technological developments, e.g., rapidly increasing storage capabilities of hard drives, memory-resident malicious software applications, or the growing use of encryption routines, that make an in-time investigation more and more difficult. In order to cope with these issues, security professionals have started to examine alternative data sources and emphasize the value of volatile system information in RAM more recently. In this paper, we give an overview of the prevailing techniques and methods to collect and analyze a computer's memory. We describe the characteristics, benefits, and drawbacks of the individual solutions and outline opportunities for future research in this evolving field of IT security. Highlights Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "1859b356a614bdffbc009c365173ab1d",
"text": "Anxiety disorders are among the most common psychiatric illnesses, and acupuncture treatment is widely accepted in the clinic without the side effects seen from various medications. We designed a scalp acupuncture treatment protocol by locating two new stimulation areas. The area one is between Yintang (M-HN-3) and Shangxing (DU-23) and Shenting (DU-24), and the area two is between Taiyang (M-HN-9) and Tianchong (GB-9) and Shuaigu (GB-8). By stimulating these two areas with high-frequency continuous electric waves, remarkable immediate and long-term effects for anxiety disorders have been observed in our practice. The first case was a 70-year-old male with general anxiety disorder (GAD) and panic attacks at night. The scalp acupuncture treatment protocol was applied with electric stimulation for 45 minutes once every week. After four sessions of acupuncture treatments, the patient reported that he did not have panic attacks at night and he had no feelings of anxiety during the day. Follow-up 4 weeks later confirmed that he did not have any episodes of panic attacks and he had no anxiety during the day since his last acupuncture treatment. The second case was a 35-year-old male who was diagnosed with posttraumatic stress disorder (PTSD) with a history of providing frontline trauma care as a Combat Medics from the Iraq combat field. He also had 21 broken bones and multiple concussions from his time in the battlefield. He had symptoms of severe anxiety, insomnia, nightmares with flashbacks, irritability, and bad temper. He also had chest pain, back pain, and joint pain due to injuries. The above treatment protocol was performed with 30 minutes of electric stimulation each time in combination with body acupuncture for pain management. After weekly acupuncture treatment for the first two visits, the patient reported that he felt less anxious and that his sleep was getting better with fewer nightmares. After six sessions of acupuncture treatments, the patient completely recovered from PTSD, went back to work, and now lives a healthy and happy family life. The above cases and clinical observation show that the scalp acupuncture treatment protocol with electric stimulation has a significant clinic outcome for GAD, panic disorder and PTSD. The possible mechanism of action of scalp acupuncture on anxiety disorder may be related to overlapping modulatory effects on the cortical structures (orbitofrontal cortex [OFC]) and medial prefrontal cortex [mPFC]) and subcortical/limbic regions (amygdala and hippocampus), and biochemical effect of acupuncture through immunohistochemistry (norepinephrine, serotonin) performed directly to the brain tissue for anxiety disorders.",
"title": ""
},
{
"docid": "3655319a1d2ff7f4bc43235ba02566bd",
"text": "In high-performance systems, stencil computations play a crucial role as they appear in a variety of different fields of application, ranging from partial differential equation solving, to computer simulation of particles’ interaction, to image processing and computer vision. The computationally intensive nature of those algorithms created the need for solutions to efficiently implement them in order to save both execution time and energy. This, in combination with their regular structure, has justified their widespread study and the proposal of largely different approaches to their optimization.\n However, most of these works are focused on aggressive compile time optimization, cache locality optimization, and parallelism extraction for the multicore/multiprocessor domain, while fewer works are focused on the exploitation of custom architectures to further exploit the regular structure of Iterative Stencil Loops (ISLs), specifically with the goal of improving power efficiency.\n This work introduces a methodology to systematically design power-efficient hardware accelerators for the optimal execution of ISL algorithms on Field-programmable Gate Arrays (FPGAs). As part of the methodology, we introduce the notion of Streaming Stencil Time-step (SST), a streaming-based architecture capable of achieving both low resource usage and efficient data reuse thanks to an optimal data buffering strategy, and we introduce a technique called SSTs queuing that is capable of delivering a pseudolinear execution time speedup with constant bandwidth.\n The methodology has been validated on significant benchmarks on a Virtex-7 FPGA using the Xilinx Vivado suite. Results demonstrate how the efficient usage of the on-chip memory resources realized by an SST allows one to treat problem sizes whose implementation would otherwise not be possible via direct synthesis of the original, unmanipulated code via High-Level Synthesis (HLS). We also show how the SSTs queuing effectively ensures a pseudolinear throughput speedup while consuming constant off-chip bandwidth.",
"title": ""
},
{
"docid": "b66301704785cb8bc44ca6cb584b8806",
"text": "For many software projects, bug tracking systems play a central role in supporting collaboration between the developers and the users of the software. To better understand this collaboration and how tool support can be improved, we have quantitatively and qualitatively analysed the questions asked in a sample of 600 bug reports from the MOZILLA and ECLIPSE projects. We categorised the questions and analysed response rates and times by category and project. Our results show that the role of users goes beyond simply reporting bugs: their active and ongoing participation is important for making progress on the bugs they report. Based on the results, we suggest four ways in which bug tracking systems can be improved.",
"title": ""
},
{
"docid": "e3cce1cb8d46721da50560ffdf1a92c6",
"text": "BACKGROUND\nMinimalist shoes have gained popularity recently because it is speculated to strengthen the foot muscles and foot arches, which may help to resist injuries. However, previous studies provided limited evidence supporting the link between changes in muscle size and footwear transition. Therefore, this study sought to examine the effects of minimalist shoes on the intrinsic and extrinsic foot muscle volume in habitual shod runners. The relationship between participants' compliance with the minimalist shoes and changes in muscle õvolume was also evaluated.\n\n\nMETHODS\nTwenty habitual shod runners underwent a 6-month self-monitoring training program designed for minimalist shoe transition. Another 18 characteristics-matched shod runners were also introduced with the same program but they maintained running practice with standard shoes. Runners were monitored using an online surveillance platform during the program. We measured overall intrinsic and extrinsic foot muscle volume before and after the program using MRI scans.\n\n\nFINDINGS\nRunners in the experimental group exhibited significantly larger leg (P=0.01, Cohen's d=0.62) and foot (P<0.01, Cohen's d=0.54) muscle after transition. Foot muscle growth was mainly contributed by the forefoot (P<0.01, Cohen's d=0.64) but not the rearfoot muscle (P=0.10, Cohen's d=0.30). Leg and foot muscle volume of runners in the control group remained similar after the program (P=0.33-0.95). A significant positive correlation was found between participants' compliance with the minimalist shoes and changes in leg muscle volume (r=0.51; P=0.02).\n\n\nINTERPRETATION\nHabitual shod runners who transitioned to minimalist shoes demonstrated significant increase in leg and foot muscle volume. Additionally, the increase in leg muscle volume was significantly correlated associated with the compliance of minimalist shoe use.",
"title": ""
},
{
"docid": "7568cb435d0211248e431d865b6a477e",
"text": "We propose prosody embeddings for emotional and expressive speech synthesis networks. The proposed methods introduce temporal structures in the embedding networks, thus enabling fine-grained control of the speaking style of the synthesized speech. The temporal structures can be designed either on the speech side or the text side, leading to different control resolutions in time. The prosody embedding networks are plugged into end-to-end speech synthesis networks and trained without any other supervision except for the target speech for synthesizing. It is demonstrated that the prosody embedding networks learned to extract prosodic features. By adjusting the learned prosody features, we could change the pitch and amplitude of the synthesized speech both at the frame level and the phoneme level. We also introduce the temporal normalization of prosody embeddings, which shows better robustness against speaker perturbations during prosody transfer tasks.",
"title": ""
}
] |
scidocsrr
|
6a85ae55305bb0c330a82457f5994f53
|
Control parameter optimization for a microgrid system using particle swarm optimization
|
[
{
"docid": "6af7f70f0c9b752d3dbbe701cb9ede2a",
"text": "This paper addresses real and reactive power management strategies of electronically interfaced distributed generation (DG) units in the context of a multiple-DG microgrid system. The emphasis is primarily on electronically interfaced DG (EI-DG) units. DG controls and power management strategies are based on locally measured signals without communications. Based on the reactive power controls adopted, three power management strategies are identified and investigated. These strategies are based on 1) voltage-droop characteristic, 2) voltage regulation, and 3) load reactive power compensation. The real power of each DG unit is controlled based on a frequency-droop characteristic and a complimentary frequency restoration strategy. A systematic approach to develop a small-signal dynamic model of a multiple-DG microgrid, including real and reactive power management strategies, is also presented. The microgrid eigen structure, based on the developed model, is used to 1) investigate the microgrid dynamic behavior, 2) select control parameters of DG units, and 3) incorporate power management strategies in the DG controllers. The model is also used to investigate sensitivity of the design to changes of parameters and operating point and to optimize performance of the microgrid system. The results are used to discuss applications of the proposed power management strategies under various microgrid operating conditions",
"title": ""
},
{
"docid": "56b58efbeab10fa95e0f16ad5924b9e5",
"text": "This paper investigates (i) preplanned switching events and (ii) fault events that lead to islanding of a distribution subsystem and formation of a micro-grid. The micro-grid includes two distributed generation (DG) units. One unit is a conventional rotating synchronous machine and the other is interfaced through a power electronic converter. The interface converter of the latter unit is equipped with independent real and reactive power control to minimize islanding transients and maintain both angle stability and voltage quality within the micro-grid. The studies are performed based on a digital computer simulation approach using the PSCAD/EMTDC software package. The studies show that an appropriate control strategy for the power electronically interfaced DG unit can ensure stability of the micro-grid and maintain voltage quality at designated buses, even during islanding transients. This paper concludes that presence of an electronically-interfaced DG unit makes the concept of micro-grid a technically viable option for further investigations.",
"title": ""
},
{
"docid": "a5911891697a1b2a407f231cf0ad6c28",
"text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.",
"title": ""
}
] |
[
{
"docid": "ce7fdc16d6d909a4e0c3294ed55af51d",
"text": "In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. We show that, without any language model, Seq2Seq and RNN-Transducer models both outperform the best reported CTC models with a language model, on the popular Hub5'00 benchmark. On our internal diverse dataset, these trends continue — RNN-Transducer models rescored with a language model after beam search outperform our best CTC models. These results simplify the speech recognition pipeline so that decoding can now be expressed purely as neural network operations. We also study how the choice of encoder architecture affects the performance of the three models — when all encoder layers are forward only, and when encoders downsample the input representation aggressively.",
"title": ""
},
{
"docid": "ba19b5bc7aabecf8b8947cfa07b47237",
"text": "We consider the problem of sparse coding, where each sample consists of a sparse linear combination of a set of dictionary atoms, and the task is to learn both the dictionary elements and the mixing coefficients. Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed. Typically, the coefficients are estimated via l1 minimization, keeping the dictionary fixed, and the dictionary is estimated through least squares, keeping the coefficients fixed. In this paper, we establish local linear convergence for this variant of alternating minimization and establish that the basin of attraction for the global optimum (corresponding to the true dictionary and the coefficients) is O ( 1/s ) , where s is the sparsity level in each sample and the dictionary satisfies RIP. Combined with the recent results of approximate dictionary estimation, this yields provable guarantees for exact recovery of both the dictionary elements and the coefficients, when the dictionary elements are incoherent.",
"title": ""
},
{
"docid": "6db439b2753b9b6b8a298292410ca6f6",
"text": "MOTIVATION\nMost existing methods for predicting causal disease genes rely on specific type of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies-for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies-for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive.\n\n\nRESULTS\nComparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better-it has close to one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes that are previously not linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature.\n\n\nAVAILABILITY\nSource code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease.",
"title": ""
},
{
"docid": "b3f90386a9ef3bffeb618ab9304ee482",
"text": "The diagnosis process is often challenging, it involves the correlation of various pieces of information followed by several possible conclusions and iterations of diseases that may overload physicians when facing urgent cases that may lead to bad consequences threatening people's lives. The physician is asked to search for all symptoms related to a specific disease. To make this kind of search possible, there is a strong need for an effective way to store and retrieve medical knowledge from various datasets in order to find links between human disease and symptoms. For this purpose, we propose in this work a new Disease-Symptom Ontology (DS-Ontology). Utilizing existing biomedical ontologies, we integrate all available disease-symptom relationships to create a DS-Ontology that will be used latter in an ontology-based Clinical Decision Support System to determine a highly effective medical diagnosis.",
"title": ""
},
{
"docid": "d9fb3ab87d8050ec5957f9747dc1980d",
"text": "Maximally Stable Extremal Regions (MSERs) have achieved great success in scene text detection. However, this low-level pixel operation inherently limits its capability for handling complex text information efficiently (e. g. connections between text or background components), leading to the difficulty in distinguishing texts from background components. In this paper, we propose a novel framework to tackle this problem by leveraging the high capability of convolutional neural network (CNN). In contrast to recent methods using a set of low-level heuristic features, the CNN network is capable of learning high-level features to robustly identify text components from text-like outliers (e.g. bikes, windows, or leaves). Our approach takes advantages of both MSERs and slidingwindow based methods. The MSERs operator dramatically reduces the number of windows scanned and enhances detection of the low-quality texts. While the sliding-window with CNN is applied to correctly separate the connections of multiple characters in components. The proposed system achieved strong robustness against a number of extreme text variations and serious real-world problems. It was evaluated on the ICDAR 2011 benchmark dataset, and achieved over 78% in F-measure, which is significantly higher than previous methods.",
"title": ""
},
{
"docid": "e9474d646b9da5e611475f4cdfdfc30e",
"text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.",
"title": ""
},
{
"docid": "3ba87a9a84f317ef3fd97c79f86340c1",
"text": "Programmers often need to reason about how a program evolved between two or more program versions. Reasoning about program changes is challenging as there is a significant gap between how programmers think about changes and how existing program differencing tools represent such changes. For example, even though modification of a locking protocol is conceptually simple and systematic at a code level, diff extracts scattered text additions and deletions per file. To enable programmers to reason about program differences at a high level, this paper proposes a rule-based program differencing approach that automatically discovers and represents systematic changes as logic rules. To demonstrate the viability of this approach, we instantiated this approach at two different abstraction levels in Java: first at the level of application programming interface (API) names and signatures, and second at the level of code elements (e.g., types, methods, and fields) and structural dependences (e.g., method-calls, field-accesses, and subtyping relationships). The benefit of this approach is demonstrated through its application to several open source projects as well as a focus group study with professional software engineers from a large e-commerce company.",
"title": ""
},
{
"docid": "9409882dd0cf21ef9eddd7681811bd9f",
"text": "Recently, the Particle Swarm Optimization (PSO) technique has gained much attention in the field of time series forecasting. Although PSO trained Artificial Neural Networks (ANNs) performed reasonably well in stationary time series forecasting, their effectiveness in tracking the structure of non-stationary data (especially those which contain trends or seasonal patterns) is yet to be justified. In this paper, we have trained neural networks with two types of PSO (Trelea1 and Trelea2) for forecasting seasonal time series data. To assess their performances, experiments are conducted on three well-known real world seasonal time series. Obtained forecast errors in terms of three common performance measures, viz. MSE, MAE and MAPE for each dataset are compared with those obtained by the Seasonal ANN (SANN) model, trained with a standard backpropagation algorithm. Comparisons demonstrate that training with PSO-Trelea1 and PSO-Trelea2 produced significantly better results than the standard backpropagation rule.",
"title": ""
},
{
"docid": "1fb13cda340d685289f1863bb2bfd62b",
"text": "1 Assistant Professor, Department of Prosthodontics, Ibn-e-Siena Hospital and Research Institute, Multan Medical and Dental College, Multan, Pakistan 2 Assistant Professor, Department of Prosthodontics, College of Dentistry, King Saud University, Riyadh, Saudi Arabia 3 Head Department of Prosthodontics, Armed Forces Institute of Dentistry, Rawalpindi, Pakistan For Correspondence: Dr Salman Ahmad, House No 10, Street No 2, Gulshan Sakhi Sultan Colony, Surej Miani Road, Multan, Pakistan. Email: drsalman21@gmail.com. Cell: 0300–8732017 INTRODUCTION",
"title": ""
},
{
"docid": "2a811ac141a9c5fb0cea4b644b406234",
"text": "Leadership is a process influence between leaders and subordinates where a leader attempts to influence the behaviour of subordinates to achieve the organizational goals. Organizational success in achieving its goals and objectives depends on the leaders of the organization and their leadership styles. By adopting the appropriate leadership styles, leaders can affect employee job satisfaction, commitment and productivity. Two hundred Malaysian executives working in public sectors voluntarily participated in this study. Two types of leadership styles, namely, transactional and transformational were found to have direct relationships with employees’ job satisfaction. The results showed that transformational leadership style has a stronger relationship with job satisfaction. This implies that transformational leadership is deemed suitable for managing government organizations. Implications of the findings were discussed further.",
"title": ""
},
{
"docid": "2d43992a8eb6e97be676c04fc9ebd8dd",
"text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.",
"title": ""
},
{
"docid": "066d3a381ffdb2492230bee14be56710",
"text": "The third generation partnership project released its first 5G security specifications in March 2018. This paper reviews the proposed security architecture and its main requirements and procedures and evaluates them in the context of known and new protocol exploits. Although security has been improved from previous generations, our analysis identifies potentially unrealistic 5G system assumptions and protocol edge cases that can render 5G communication systems vulnerable to adversarial attacks. For example, null encryption and null authentication are still supported and can be used in valid system configurations. With no clear proposal to tackle pre-authentication message-based exploits, mobile devices continue to implicitly trust any serving network, which may or may not enforce a number of optional security features, or which may not be legitimate. Moreover, several critical security and key management functions are considered beyond the scope of the specifications. The comparison with known 4G long-term evolution protocol exploits reveals that the 5G security specifications, as of Release 15, Version 1.0.0, do not fully address the user privacy and network availability challenges.",
"title": ""
},
{
"docid": "900a33dd42a9e55e1c00216a621daa33",
"text": "There is a current trend to support pet health through the addition of natural supplements to their diet, taking into account the high incidence of medical conditions related to their immune system and gastrointestinal tract. This study investigates effects of the plant Eleutherococcus senticosus as a dietary additive on faecal microbiota, faecal characteristics, blood serum biochemistry and selected parameters of cellular immunity in healthy dogs. A combination of the plant with the canine-derived probiotic strain Lactobacillus fermentum CCM 7421 was also evaluated. Thirty-two dogs were devided into 4 treatment groups; receiving no additive (control), dry root extract of E. senticosus (8 mg/kg of body weight), probiotic strain (108 CFU/mL, 0.1 mL/kg bw) and the combination of both additives. The trial lasted 49 days with 14 days supplementation period. Results confirm no antimicrobial effect of the plant on the probiotic abundance either in vitro (cultivation test) or in vivo. The numbers of clostridia, lactic acid bacteria and Gram-negative bacteria as well as the concentration of serum total protein, triglyceride, glucose and aspartate aminotransferase were significantly altered according to the treatment group. Leukocyte phagocytosis was significantly stimulated by the addition of probiotic while application of plant alone led to a significant decrease.",
"title": ""
},
{
"docid": "19ee4367e4047f45b60968e3374cae7a",
"text": "BACKGROUND\nFusion zones between superficial fascia and deep fascia have been recognized by surgical anatomists since 1938. Anatomical dissection performed by the author suggested that additional superficial fascia fusion zones exist.\n\n\nOBJECTIVES\nA study was performed to evaluate and define fusion zones between the superficial and the deep fascia.\n\n\nMETHODS\nDissection of fresh and minimally preserved cadavers was performed using the accepted technique for defining anatomic spaces: dye injection combined with cross-sectional anatomical dissection.\n\n\nRESULTS\nThis study identified bilaminar membranes traveling from deep to superficial fascia at consistent locations in all specimens. These membranes exist as fusion zones between superficial and deep fascia, and are referred to as SMAS fusion zones.\n\n\nCONCLUSIONS\nNerves, blood vessels and lymphatics transition between the deep and superficial fascia of the face by traveling along and within these membranes, a construct that provides stability and minimizes shear. Bilaminar subfascial membranes continue into the subcutaneous tissues as unilaminar septa on their way to skin. This three-dimensional lattice of interlocking horizontal, vertical, and oblique membranes defines the anatomic boundaries of the fascial spaces as well as the deep and superficial fat compartments of the face. This information facilitates accurate volume augmentation; helps to avoid facial nerve injury; and provides the conceptual basis for understanding jowls as a manifestation of enlargement of the buccal space that occurs with age.",
"title": ""
},
{
"docid": "9343521f74c244255ee6340b33947427",
"text": "Using a community sample of 192 adult women who had been sexually abused during childhood, the present study tested the hypothesis that perceived stigma, betrayal, powerlessness, and self-blame mediate the long-term effects of child sexual abuse. A path analysis indicated that the level of psychological distress currently experienced by adult women who had been sexually abused in childhood was mediated by feelings of stigma and self-blame. This result provides partial support for Finkelhor and Browne's (1985) traumagenic dynamics model of child sexual abuse. The limitations of the study are discussed.",
"title": ""
},
{
"docid": "166e615188d168d89fcd091871727344",
"text": "Two methods are analyzed for inertially stabilizing the pointing vector defining the line of sight (LOS) of a two-axis gimbaled laser tracker. Mounting the angular rate and acceleration sensors directly on the LOS axes is often used for precision pointing applications. This configuration impacts gimbal size, and the sensors must be capable of withstanding high angular slew rates. With the other stabilization method, sensors are mounted on the gimbal base, which alleviates some issues with the direct approach but may be less efficient, since disturbances are not measured in the LOS coordinate frame. This paper investigates the impact of LOS disturbances and sensor noise on the performance of each stabilization control loop configuration. It provides a detailed analysis of the mechanisms by which disturbances are coupled to the LOS track vector for each approach, and describes the advantages and disadvantages of each. It concludes with a performance comparison based upon simulated sensor noise and three sets of platform disturbance inputs ranging from mild to harsh disturbance environments.",
"title": ""
},
{
"docid": "40cd4d0863ed757709530af59e928e3b",
"text": "Kynurenic acid (KYNA) is an endogenous antagonist of ionotropic glutamate receptors and the α7 nicotinic acetylcholine receptor, showing anticonvulsant and neuroprotective activity. In this study, the presence of KYNA in food and honeybee products was investigated. KYNA was found in all 37 tested samples of food and honeybee products. The highest concentration of KYNA was obtained from honeybee products’ samples, propolis (9.6 nmol/g), honey (1.0–4.8 nmol/g) and bee pollen (3.4 nmol/g). A high concentration was detected in fresh broccoli (2.2 nmol/g) and potato (0.7 nmol/g). Only traces of KYNA were found in some commercial baby products. KYNA administered intragastrically in rats was absorbed from the intestine into the blood stream and transported to the liver and to the kidney. In conclusion, we provide evidence that KYNA is a constituent of food and that it can be easily absorbed from the digestive system.",
"title": ""
},
{
"docid": "232891b57ea0ca1852fbe3e63157db26",
"text": "With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of securitycritical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.",
"title": ""
},
{
"docid": "5fde7006ec6f7cf4f945b234157e5791",
"text": "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.",
"title": ""
}
] |
scidocsrr
|
61249496249b2993b16d3f18d92b74b6
|
Is UTAUT really used or just cited for the sake of it? a systematic review of citations of UTAUT's originating article
|
[
{
"docid": "714df72467bc3e919b7ea7424883cf26",
"text": "Although a lot of attention has been paid to software cost estimation since 1960, making accurate effort and schedule estimation is still a challenge. To collect evidence and identify potential areas of improvement in software cost estimation, it is important to investigate the estimation accuracy, the estimation method used, and the factors influencing the adoption of estimation methods in current industry. This paper analyzed 112 projects from the Chinese software project benchmarking dataset and conducted questionnaire survey on 116 organizations to investigate the above information. The paper presents the current situations related to software project estimation in China and provides evidence-based suggestions on how to improve software project estimation. Our survey results suggest, e.g., that large projects were more prone to cost and schedule overruns, that most computing managers and professionals were neither satisfied nor dissatisfied with the project estimation, that very few organizations (15%) used model-based methods, and that the high adoption cost and insignificant benefit after adoption were the main causes for low use of model-based methods.",
"title": ""
}
] |
[
{
"docid": "e4d1a0be0889aba00b80a2d6cdc2335b",
"text": "This study uses a multi-period structural model developed by Chen and Yeh (2006), which extends the Geske-Johnson (1987) compound option model to evaluate the performance of capital structure arbitrage under a multi-period debt structure. Previous studies exploring capital structure arbitrage have typically employed single-period structural models, which have very limited empirical scopes. In this paper, we predict the default situations of a firm using the multi-period Geske-Johnson model that assumes endogenous default barriers. The Geske-Johnson model is the only model that accounts for the entire debt structure and imputes the default barrier to the asset value of the firm. This study also establishes trading strategies and analyzes the arbitrage performance of 369 North American obligators from 2004 to 2008. Comparing the performance of capital structure arbitrage between the Geske-Johnson and CreditGrades models, we find that the extended Geske-Johnson model is more suitable than the CreditGrades model for exploiting the mispricing between equity prices and credit default swap spreads.",
"title": ""
},
{
"docid": "4d8573fa52e325e2a058f6c49698dd26",
"text": "Running applications in the cloud efficiently requires much more than deploying software in virtual machines. Cloud applications have to be continuously managed: (1) to adjust their resources to the incoming load and (2) to face transient failures replicating and restarting components to provide resiliency on unreliable infrastructure. Continuous managementmonitors application and infrastructural metrics to provide automated and responsive reactions to failures (healthmanagement) and changing environmental conditions (auto-scaling) minimizing human intervention. In the current practice, management functionalities are provided as infrastructural or third party services. In both cases they are external to the application deployment. We claim that this approach has intrinsic limits, namely that separating management functionalities from the application prevents them from naturally scaling with the application and requires additional management code and human intervention. Moreover, using infrastructure provider services for management functionalities results in vendor lock-in effectively preventing cloud applications to adapt and run on the most effective cloud for the job. In this paper we discuss the main characteristics of cloud native applications, propose a novel architecture that enables scalable and resilient self-managing applications in the cloud, and relate on our experience in porting a legacy application to the cloud applying cloud-native principles. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1f3e960c9e73e8fcc3307824cf2d0317",
"text": "With the development of the integration between mobile communication and Internet technology, China is expected to have a large number of M-payment users due to its population size with a large number of mobile users. However, the number of M-payment users in China is still low and currently there are limited in-depth studies exploring the adoption of M-payment in China. This study aims to explore reasons for individuals to use M-payment in China through a qualitative study. The research results indicated that M-payment adoption was influenced by various reasons related to system quality, service quality, usefulness, social influence, trust, among others. The study findings indicate that the influence of system quality and service quality on individual’s decision to use in China appear to be the most important. A particular individual lifestyle, need and promotion offered by service providers have also been identified as important reasons for using M-payment in China. The outcomes of this study enhance the current knowledge about the M-payment adoption particularly in China. They can also be used by service providers to devise appropriate strategies to encourage wider adoption of M-payment.",
"title": ""
},
{
"docid": "23ffdf5e7797e7f01c6d57f1e5546026",
"text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.",
"title": ""
},
{
"docid": "22361b6c10c3580bc082d819d8128d22",
"text": "In this paper, we discuss the method of Bayesian regression and its efficacy for predicting price variation of Bitcoin, a recently popularized virtual, cryptographic currency. Bayesian regression refers to utilizing empirical data as proxy to perform Bayesian inference. We utilize Bayesian regression for the so-called “latent source model”. The Bayesian regression for “latent source model” was introduced and discussed by Chen, Nikolov and Shah [1] and Bresler, Chen and Shah [2] for the purpose of binary classification. They established theoretical as well as empirical efficacy of the method for the setting of binary classification. In this paper, instead we utilize it for predicting real-valued quantity, the price of Bitcoin. Based on this price prediction method, we devise a simple strategy for trading Bitcoin. The strategy is able to nearly double the investment in less than 60 day period when run against real data trace.",
"title": ""
},
{
"docid": "a1d061eb47e1404d2160c5e830229dc1",
"text": "Recommendation techniques are very important in the fields of E-commerce and other web-based services. One of the main difficulties is dynamically providing high-quality recommendation on sparse data. In this paper, a novel dynamic personalized recommendation algorithm is proposed, in which information contained in both ratings and profile contents are utilized by exploring latent relations between ratings, a set of dynamic features are designed to describe user preferences in multiple phases, and finally, a recommendation is made by adaptively weighting the features. Experimental results on public data sets show that the proposed algorithm has satisfying performance.",
"title": ""
},
{
"docid": "a12c4e820254b07f322727affe23cb9d",
"text": "Attributed network embedding has been widely used in modeling real-world systems. The obtained low-dimensional vector representations of nodes preserve their proximity in terms of both network topology and node attributes, upon which different analysis algorithms can be applied. Recent advances in explanation-based learning and human-in-the-loop models show that by involving experts, the performance of many learning tasks can be enhanced. It is because experts have a better cognition in the latent information such as domain knowledge, conventions, and hidden relations. It motivates us to employ experts to transform their meaningful cognition into concrete data to advance network embedding. However, learning and incorporating the expert cognition into the embedding remains a challenging task. Because expert cognition does not have a concrete form, and is difficult to be measured and laborious to obtain. Also, in a real-world network, there are various types of expert cognition such as the comprehension of word meaning and the discernment of similar nodes. It is nontrivial to identify the types that could lead to a significant improvement in the embedding. In this paper, we study a novel problem of exploring expert cognition for attributed network embedding and propose a principled framework NEEC. We formulate the process of learning expert cognition as a task of asking experts a number of concise and general queries. Guided by the exemplar theory and prototype theory in cognitive science, the queries are systematically selected and can be generalized to various real-world networks. The returned answers from the experts contain their valuable cognition. We model them as new edges and directly add into the attributed network, upon which different embedding methods can be applied towards a more informative embedding representation. Experiments on real-world datasets verify the effectiveness and efficiency of NEEC.",
"title": ""
},
{
"docid": "f6463026a75a981c22e00a98990a095a",
"text": "Thanks to their anonymity (pseudonymity) and elimination of trusted intermediaries, cryptocurrencies such as Bitcoin have created or stimulated growth in many businesses and communities. Unfortunately, some of these are criminal, e.g., money laundering, illicit marketplaces, and ransomware. Next-generation cryptocurrencies such as Ethereum will include rich scripting languages in support of smart contracts, programs that autonomously intermediate transactions. In this paper, we explore the risk of smart contracts fueling new criminal ecosystems. Specifically, we show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).\n We show that CSCs for leakage of secrets (a la Wikileaks) are efficiently realizable in existing scripting languages such as that in Ethereum. We show that CSCs for theft of cryptographic keys can be achieved using primitives, such as Succinct Non-interactive ARguments of Knowledge (SNARKs), that are already expressible in these languages and for which efficient supporting language extensions are anticipated. We show similarly that authenticated data feeds, an emerging feature of smart contract systems, can facilitate CSCs for real-world crimes (e.g., property crimes).\n Our results highlight the urgency of creating policy and technical safeguards against CSCs in order to realize the promise of smart contracts for beneficial goals.",
"title": ""
},
{
"docid": "812f7807a3d05aa2a65acff1dd5d87d3",
"text": "In this paper we present a novel framework for geolocalizing Unmanned Aerial Vehicles (UAVs) using only their onboard camera. The framework exploits the abundance of satellite imagery, along with established computer vision and deep learning methods, to locate the UAV in a satellite imagery map. It utilizes the contextual information extracted from the scene to attain increased geolocalization accuracy and enable navigation without the use of a Global Positioning System (GPS), which is advantageous in GPS-denied environments and provides additional enhancement to existing GPS-based systems. The framework inputs two images at a time, one captured using a UAV-mounted downlooking camera, and the other synthetically generated from the satellite map based on the UAV location within the map. Local features are extracted and used to register both images, a process that is performed recurrently to relate UAV motion to its actual map position, hence performing preliminary localization. A semantic shape matching algorithm is subsequently applied to extract and match meaningful shape information from both images, and use this information to improve localization accuracy. The framework is evaluated on two different datasets representing different geographical regions. Obtained results demonstrate the viability of proposed method and that the utilization of visual information can offer a promising approach for unconstrained UAV navigation and enable the aerial platform to be self-aware of its surroundings thus opening up new application domains or enhancing existing ones.",
"title": ""
},
{
"docid": "bd53dea475e4ddecf40ebf31a225f0c2",
"text": "Business process management is multidimensional tool which utilizes several methods to examine processes from a holistic perspective, transcending the narrow borders of specific functions. It undertakes fundamental reconsideration and radical redesign of organizational processes in order to achieve drastic improvement of current performance in terms of cost, service and speed. Business process management tries to encourage a radical change rather than an incremental change. An analytical approach has been applied for the current study. For this study, the case of Bank X, which is a leading public sector bank operating in the state, has been taken into consideration. A sample of 250 customers was selected randomly from Alwar, Dausa and Bharatpur districts. For policy framework, corporate headquarters were consulted. For the research a self-designed survey instrument, looking for information from the customers on several parameters like cost, quality, services and performance, was used. This article tries to take a critical account of existent business process management in Bank X and to study the relationship between business process management and organizational performance. The data has been tested by correlation analysis. The findings of the study show that business process management exists in the Bank X and there is a significant relationship between business process management and organizational performance. Keywords-Business Process Management; Business Process Reengineering; Organizational Performance",
"title": ""
},
{
"docid": "63a9ff660f9d6192c1633f5fca0bc28c",
"text": "The natural world provides numerous cases for inspiration in engineering design. Biological organisms, phenomena, and strategies, which we refer to as biological systems, provide a rich set of analogies. These systems provide insight into sustainable and adaptable design and offer engineers billions of years of valuable experience, which can be used to inspire engineering innovation. This research presents a general method for functionally representing biological systems through systematic design techniques, leading to the conceptualization of biologically inspired engineering designs. Functional representation and abstraction techniques are used to translate biological systems into an engineering context. The goal is to make the biological information accessible to engineering designers who possess varying levels of biological knowledge but have a common understanding of engineering design. Creative or novel engineering designs may then be discovered through connections made between biology and engineering. To assist with making connections between the two domains concept generation techniques that use biological information, engineering knowledge, and automatic concept generation software are employed. Two concept generation approaches are presented that use a biological model to discover corresponding engineering components that mimic the biological system and use a repository of engineering and biological information to discover which biological components inspire functional solutions to fulfill engineering requirements. Discussion includes general guidelines for modeling biological systems at varying levels of fidelity, advantages, limitations, and applications of this research. The modeling methodology and the first approach for concept generation are illustrated by a continuous example of lichen.",
"title": ""
},
{
"docid": "e49ea1a6aa8d7ffec9ca16ac18cfc43a",
"text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work marries the two and proposes a method for representing generic objects as quadrics which allows object detections to be seamlessly integrated in a SLAM framework. For scene coverage, additional dominant planar structures are modeled as infinite planes. Experiments show that the proposed points-planes-quadrics representation can easily incorporate Manhattan and object affordance constraints, greatly improving camera localization and leading to semantically meaningful maps. The performance of our SLAM system is demonstrated in https://youtu.be/dR-rB9keF8M.",
"title": ""
},
{
"docid": "4489ab0d4d1c29c5f72a67042250468a",
"text": "This paper adopts a holistic approach to explain why social capital matters for effective implementation, widespread uptake, greater social inclusion, and the sustainability of CI initiatives. It describes a theoretical framework drawn from diffusion of innovation, community development and social capital theories. The framework emphasises the interplay between physical infrastructure (including hard technologies and their location in the community), soft technologies (including capacity building, education, training and awareness raising), social infrastructure (including local networks and community organisations) and social capital (including trust and reciprocity, strong sense of community, shared vision, and outcomes from participation in local and external networks).",
"title": ""
},
{
"docid": "4ac06b70fc02c83cb676f5c479a0fe93",
"text": "We propose a framework that captures the denotational probabilities of words and phrases by embedding them in a vector space, and present a method to induce such an embedding from a dataset of denotational probabilities. We show that our model successfully predicts denotational probabilities for unseen phrases, and that its predictions are useful for textual entailment datasets such as SICK and SNLI.",
"title": ""
},
{
"docid": "02b6bcef39a21b14ce327f3dc9671fef",
"text": "We've all heard tales of multimillion dollar mistakes that somehow ran off course. Are software projects that risky or do managers need to take a fresh approach when preparing for such critical expeditions? Software projects are notoriously difficult to manage and too many of them end in failure. In 1995, annual U.S. spending on software projects reached approximately $250 billion and encompassed an estimated 175,000 projects [6]. Despite the costs involved, press reports suggest that project failures are occurring with alarming frequency. In 1995, U.S companies alone spent an estimated $59 billion in cost overruns on IS projects and another $81 billion on canceled software projects [6]. One explanation for the high failure rate is that managers are not taking prudent measures to assess and manage the risks involved in these projects. is Advocates of software project risk management claim that by countering these threats to success, the incidence of failure can be reduced [4, 5]. Before we can develop meaningful risk management strategies, however, we must identify these risks. Furthermore, the relative importance of these risks needs to be established, along with some understanding as to why certain risks are perceived to be more important than others. This is necessary so that managerial attention can be focused on the areas that constitute the greatest threats. Finally, identified risks must be classified in a way that suggests meaningful risk mitigation strategies. Here, we report the results of a Delphi study in which experienced software project managers identified and ranked the most important risks. The study led not only to the identification of risk factors and their relative importance, but also to novel insights into why project managers might view certain risks as being more important than others. Based on these insights, we introduce a framework for classifying software project risks and discuss appropriate strategies for managing each type of risk. Since the 1970s, both academics and practitioners have written about risks associated with managing software projects [1, 2, 4, 5, 7, 8]. Unfortunately , much of what has been written on risk is based either on anecdotal evidence or on studies limited to a narrow portion of the development process. Moreover, no systematic attempts have been made to identify software project risks by tapping the opinions of those who actually have experience in managing such projects. With a few exceptions [3, 8], there has been little attempt to understand the …",
"title": ""
},
{
"docid": "08196718e17bfcdcecea60b0fb735638",
"text": "Atari games are an excellent testbed for studying intelligent behavior, as they offer a range of tasks that differ widely in their visual representation, game dynamics, and goals presented to an agent. The last two years have seen a spate of research into artificial agents that use a single algorithm to learn to play these games. The best of these artificial agents perform at better-than-human levels on most games, but require hundreds of hours of game-play experience to produce such behavior. Humans, on the other hand, can learn to perform well on these tasks in a matter of minutes. In this paper we present data on human learning trajectories for several Atari games, and test several hypotheses about the mechanisms that lead to such rapid learning.",
"title": ""
},
{
"docid": "0332be71a529382e82094239db31ea25",
"text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).",
"title": ""
},
{
"docid": "81667ba5e59bd04d979b2206b54b5b32",
"text": "Parallelism is an important rhetorical device. We propose a machine learning approach for automated sentence parallelism identification in student essays. We b uild an essay dataset with sentence level parallelism annotated. We derive features by combining gen eralized word alignment strategies and the alignment measures between word sequences. The experiment al r sults show that sentence parallelism can be effectively identified with a F1 score of 82% at pair-wise level and 72% at parallelism chunk l evel. Based on this approach, we automatically identify sentence parallelism in more than 2000 student essays and study the correlation between the use of sentence parall elism and the types and quality of essays.",
"title": ""
},
{
"docid": "e3ef98c0dae25c39e4000e62a348479e",
"text": "A New Framework For Hybrid Models By Coupling Latent Variables 1 User specifies p with a generative and a discriminative component and latent z p(x, y, z) = p(y|x, z) · p(x, z). The p(y|x, z), p(x, z) can be very general; they only share latent z, not parameters! 2We train both components using a multi-conditional objective α · Eq(x,y)Eq(z|x) ` (y, p(y|x, z)) } {{ } discriminative loss (`2, log) +β ·Df [q(x, z)||p(x, z)] } {{ } f-divergence (KL, JS) where q(x, y) is data distribution and α, β > 0 are hyper-parameters.",
"title": ""
},
{
"docid": "5fbdb9d5af553b5b2cb2ad54856cca9c",
"text": "A promising way to improve the programming process is increasing the declarativeness of programming, approximation to natural language on the base of accumulating and actively using knowledge. The essence of the proposed approach is representing the semantics of a program in the form of a set of concepts about actions, participants, resources and relations between them, accumulation and classification of machine-readable knowledge of computer programs in order to increase the degree of automation of programming. The novelty of the presented work is retrieving relevant precedents by the given description of resources and actions under them. The paper describes the ontological model of a program, considers the technology of programming using ontologies, gives an example of definition of the semantics of a problem, and discusses the positive aspects of the proposed approach.",
"title": ""
}
] |
scidocsrr
|
d98ef66dd6ee891bd29a2b4b0db8fcd9
|
The Forms of Bullying Scale (FBS): validity and reliability estimates for a measure of bullying victimization and perpetration in adolescence.
|
[
{
"docid": "4abceedb1f6c735a8bc91bc811ce4438",
"text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.",
"title": ""
},
{
"docid": "6d52a9877ddf18eb7e43c83000ed4da1",
"text": "Cyberbullying has recently emerged as a new form of bullying and harassment. 360 adolescents (12-20 years), were surveyed to examine the nature and extent of cyberbullying in Swedish schools. Four categories of cyberbullying (by text message, email, phone call and picture/video clip) were examined in relation to age and gender, perceived impact, telling others, and perception of adults becoming aware of such bullying. There was a significant incidence of cyberbullying in lower secondary schools, less in sixth-form colleges. Gender differences were few. The impact of cyberbullying was perceived as highly negative for picture/video clip bullying. Cybervictims most often chose to either tell their friends or no one at all about the cyberbullying, so adults may not be aware of cyberbullying, and (apart from picture/video clip bullying) this is how it was perceived by pupils. Findings are discussed in relation to similarities and differences between cyberbullying and the more traditional forms of bullying.",
"title": ""
},
{
"docid": "ff27d6a0bb65b7640ca1dbe03abc4652",
"text": "The psychometric properties of the Depression Anxiety Stress Scales (DASS) were evaluated in a normal sample of N = 717 who were also administered the Beck Depression Inventory (BDI) and the Beck Anxiety Inventory (BAI). The DASS was shown to possess satisfactory psychometric properties, and the factor structure was substantiated both by exploratory and confirmatory factor analysis. In comparison to the BDI and BAI, the DASS scales showed greater separation in factor loadings. The DASS Anxiety scale correlated 0.81 with the BAI, and the DASS Depression scale correlated 0.74 with the BDI. Factor analyses suggested that the BDI differs from the DASS Depression scale primarily in that the BDI includes items such as weight loss, insomnia, somatic preoccupation and irritability, which fail to discriminate between depression and other affective states. The factor structure of the combined BDI and BAI items was virtually identical to that reported by Beck for a sample of diagnosed depressed and anxious patients, supporting the view that these clinical states are more severe expressions of the same states that may be discerned in normals. Implications of the results for the conceptualisation of depression, anxiety and tension/stress are considered, and the utility of the DASS scales in discriminating between these constructs is discussed.",
"title": ""
}
] |
[
{
"docid": "b9a893fb526955b5131860a1402e2f7c",
"text": "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.",
"title": ""
},
{
"docid": "89ef3b70b0afdf345900116e0bf63bbd",
"text": "We extend the alternating minimization algorithm recently proposed in [Y. Wang, J. Yang, W. Yin, and Y. Zhang, SIAM J. Imag. Sci., 1 (2008), pp. 248–272]; [J. Yang, W. Yin, Y. Zhang, and Y. Wang, SIAM J. Imag. Sci., 2 (2009), pp. 569–592] to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise. The algorithm minimizes the sum of a multichannel extension of total variation and a data fidelity term measured in the 1-norm, and is applicable to both salt-and-pepper and random-valued impulsive noise. We derive the algorithm by applying the well-known quadratic penalty function technique and prove attractive convergence properties, including finite convergence for some variables and q-linear convergence rate. Under periodic boundary conditions, the main computational requirements of the algorithm are fast Fourier transforms and a low-complexity Gaussian elimination procedure. Numerical results on images with different blurs and impulsive noise are presented to demonstrate the efficiency of the algorithm. In addition, it is numerically compared to the least absolute deviation method [H. Y. Fu, M. K. Ng, M. Nikolova, and J. L. Barlow, SIAM J. Sci. Comput., 27 (2006), pp. 1881–1902] and the two-phase method [J. F. Cai, R. Chan, and M. Nikolova, AIMS J. Inverse Problems and Imaging, 2 (2008), pp. 187–204] for recovering grayscale images. We also present results of recovering multichannel images.",
"title": ""
},
{
"docid": "5b82b7ad303c0db3085e9495756b602c",
"text": "Modern Web 2.0 pages combine scripts from several sources into a single client-side JavaScript program with almost no isolation. In order to prevent attacks from an untrusted third-party script or cross-site scripting, tracking provenance of data is imperative. However, no browser offers this security mechanism. This work presents the first information flow control mechanism for full JavaScript. We track information flow dynamically as much as possible but rely on intra-procedural static analysis to capture implicit flow. Our analysis handles even the dreaded eval function soundly and incorporates flow based on JavaScript's prototype inheritance. We implemented our analysis in a production JavaScript engine and report both qualitative as well as quantitative evaluation results.",
"title": ""
},
{
"docid": "88d1062b03e96c8c50c6ee8923cb32da",
"text": "On the one hand this paper presents a theoretical method to predict the responses for the parallel coupled microstrip bandpass filters, and on the other hand proposes a new MATLAB simulation interface including all parameters design procedure to predict the filter responses. The main advantage of this developed interface calculator is to enable researchers and engineers to design and determine easily all parameters of the PCMBPF responses with high accuracy and very small CPU time. To validate the numerical method and the corresponding new interface calculator, two PCMBP filters for wireless communications are designed and compared with the commercial electromagnetic CST simulator and the fabricated prototype respectively. Measured results show good agreement with those obtained by numerical method and simulations.",
"title": ""
},
{
"docid": "07db8fea11297fea2def9440a7d614dc",
"text": "We present the 2017 Visual Domain Adaptation (VisDA) dataset and challenge, a large-scale testbed for unsupervised domain adaptation across visual domains. Unsupervised domain adaptation aims to solve the real-world problem of domain shift, where machine learning models trained on one domain must be transferred and adapted to a novel visual domain without additional supervision. The VisDA2017 challenge is focused on the simulation-to-reality shift and has two associated tasks: image classification and image segmentation. The goal in both tracks is to first train a model on simulated, synthetic data in the source domain and then adapt it to perform well on real image data in the unlabeled test domain. Our dataset is the largest one to date for cross-domain object classification, with over 280K images across 12 categories in the combined training, validation and testing domains. The image segmentation dataset is also large-scale with over 30K images across 18 categories in the three domains. We compare VisDA to existing cross-domain adaptation datasets and provide a baseline performance analysis, as well as results of the challenge.",
"title": ""
},
{
"docid": "e54a0387984553346cf718a6fbe72452",
"text": "Learning distributed representations for relation instances is a central technique in downstream NLP applications. In order to address semantic modeling of relational patterns, this paper constructs a new dataset that provides multiple similarity ratings for every pair of relational patterns on the existing dataset (Zeichner et al., 2012). In addition, we conduct a comparative study of different encoders including additive composition, RNN, LSTM, and GRU for composing distributed representations of relational patterns. We also present Gated Additive Composition, which is an enhancement of additive composition with the gating mechanism. Experiments show that the new dataset does not only enable detailed analyses of the different encoders, but also provides a gauge to predict successes of distributed representations of relational patterns in the relation classification task.",
"title": ""
},
{
"docid": "a19c9b54c2ed3ae44c7ec1908d023ef6",
"text": "BACKGROUND AND PURPOSE\nThe etiology of small fiber neuropathy (SFN) often remains unclear. Since SFN may be the only symptom of late-onset Fabry disease, it may be underdiagnosed in patients with idiopathic polyneuropathy. We aimed to uncover the etiological causes of seemingly idiopathic SFN by applying a focused investigatory procedure, to describe the clinical phenotype of true idiopathic SFN, and to elucidate the possible prevalence of late-onset Fabry disease in these patients.\n\n\nMETHODS\nForty-seven adults younger than 60 years with seemingly idiopathic pure or predominantly small fiber sensory neuropathy underwent a standardized focused etiological and clinical investigation. The patients deemed to have true idiopathic SFN underwent genetic analysis of the alpha-galactosidase A gene (GLA) that encodes the enzyme alpha-galactosidase A (Fabry disease).\n\n\nRESULTS\nThe following etiologies were identified in 12 patients: impaired glucose tolerance (58.3%), diabetes mellitus (16.6%), alcohol abuse (8.3%), mitochondrial disease (8.3%), and hereditary neuropathy (8.3%). Genetic alterations of unknown clinical significance in GLA were detected in 6 of the 29 patients with true idiopathic SFN, but this rate did not differ significantly from that in healthy controls (n=203). None of the patients with genetic alterations in GLA had significant biochemical abnormalities simultaneously in blood, urine, and skin tissue.\n\n\nCONCLUSIONS\nA focused investigation may aid in uncovering further etiological factors in patients with seemingly idiopathic SFN, such as impaired glucose tolerance. However, idiopathic SFN in young to middle-aged Swedish patients does not seem to be due to late-onset Fabry disease.",
"title": ""
},
{
"docid": "07fc4ce339369ecd744ab180c5b56b45",
"text": "The main objective of this study was to identify successful factors in implementing an e-learning program. Existing literature has identified several successful factors in implementing an e-learning program. These factors include program content, web page accessibility, learners’ participation and involvement, web site security and support, institution commitment, interactive learning environment, instructor competency, and presentation and design. All these factors were tested together with other related criteria which are important for e-learning program implementation. The samples were collected based on quantitative methods, specifically, self-administrated questionnaires. All the criteria that were tested to see if they were important in an e-learning program implementation.",
"title": ""
},
{
"docid": "2223186282138da53f798ae32f11b7e4",
"text": "BACKGROUND\nTo assess the effectiveness of physical therapy (PT) interventions on functioning in children with cerebral palsy (CP).\n\n\nMETHODS\nA search was made in Medline, Cinahl, PEDro and the Cochrane library for the period 1990 to February 2007. Only randomized controlled trials (RCTs) on PT interventions in children with diagnosed CP were included. Two reviewers independently assessed the methodological quality and extracted the data. The outcomes measured in the trials were classified using the International Classification of Functioning, Disability and Health (ICF).\n\n\nRESULTS\nTwenty-two trials were identified. Eight intervention categories were distinguished. Four trials were of high methodological quality. Moderate evidence of effectiveness was established for two intervention categories: effectiveness of upper extremity treatments on attained goals and active supination, and of prehensile hand treatment and neurodevelopmental therapy (NDT) or NDT twice a week on developmental status, and of constraint-induced therapy on amount and quality of hand use. Moderate evidence of ineffectiveness was found of strength training on walking speed and stride length. Conflicting evidence was found for strength training on gross motor function. For the other intervention categories the evidence was limited due to low methodological quality and the statistically insignificant results of the studies.\n\n\nCONCLUSION\nDue to limitations in methodological quality and variations in population, interventions and outcomes, mostly limited evidence on the effectiveness of most PT interventions is available through RCTs. Moderate evidence was found for some effectiveness of upper extremity training. Well-designed trials are needed especially for focused PT interventions.",
"title": ""
},
{
"docid": "8a8b33eabebb6d53d74ae97f8081bf7b",
"text": "Social networks are inevitable part of modern life. A class of social networks is those with both positive (friendship or trust) and negative (enmity or distrust) links. Ranking nodes in signed networks remains a hot topic in computer science. In this manuscript, we review different ranking algorithms to rank the nodes in signed networks, and apply them to the sign prediction problem. Ranking scores are used to obtain reputation and optimism, which are used as features in the sign prediction problem. Reputation of a node shows patterns of voting towards the node and its optimism demonstrates how optimistic a node thinks about others. To assess the performance of different ranking algorithms, we apply them on three signed networks including Epinions, Slashdot and Wikipedia. In this paper, we introduce three novel ranking algorithms for signed networks and compare their ability in predicting signs of edges with already existing ones. We use logistic regression as the predictor and the reputation and optimism values for the trustee and trustor as features (that are obtained based on different ranking algorithms). We find that ranking algorithms resulting in correlated ranking scores, leads to almost the same prediction accuracy. Furthermore, our analysis identifies a number of ranking algorithms that result in higher prediction accuracy compared to others.",
"title": ""
},
{
"docid": "774bf4b0a2c8fe48607e020da2737041",
"text": "A class of three-dimensional planar arrays in substrate integrated waveguide (SIW) technology is proposed, designed and demonstrated with 8 × 16 elements at 35 GHz for millimeter-wave imaging radar system applications. Endfire element is generally chosen to ensure initial high gain and broadband characteristics for the array. Fermi-TSA (tapered slot antenna) structure is used as element to reduce the beamwidth. Corrugation is introduced to reduce the resulting antenna physical width without degradation of performance. The achieved measured gain in our demonstration is about 18.4 dBi. A taper shaped air gap in the center is created to reduce the coupling between two adjacent elements. An SIW H-to-E-plane vertical interconnect is proposed in this three-dimensional architecture and optimized to connect eight 1 × 16 planar array sheets to the 1 × 8 final network. The overall architecture is exclusively fabricated by the conventional PCB process. Thus, the developed SIW feeder leads to a significant reduction in both weight and cost, compared to the metallic waveguide-based counterpart. A complete antenna structure is designed and fabricated. The planar array ensures a gain of 27 dBi with low SLL of 26 dB and beamwidth as narrow as 5.15 degrees in the E-plane and 6.20 degrees in the 45°-plane.",
"title": ""
},
{
"docid": "6bfa37ad9f86763381c74d1501ec808d",
"text": "Spectrum-based fault localization is amongst the most effective techniques for automatic fault localization. However, abstractions of program execution traces, one of the required inputs for this technique, require instrumentation of the software under test at a statement level of granularity in order to compute a list of potential faulty statements. This introduces a considerable overhead in the fault localization process, which can even become prohibitive in, e.g., resource constrained environments. To counter this problem, we propose a new approach, coined Dynamic Code Coverage (DCC), aimed at reducing this instrumentation overhead. This technique, by means of using coarser instrumentation, starts by analyzing coverage traces for large components of the system under test. It then progressively increases the instrumentation detail for faulty components, until the statement level of detail is reached. To assess the validity of our proposed approach, an empirical evaluation was performed, injecting faults in six real-world software projects. The empirical evaluation demonstrates that the dynamic code coverage approach reduces the execution overhead that exists in spectrum-based fault localization, and even presents a more concise potential fault ranking to the user. We have observed execution time reductions of 27% on average and diagnostic report size reductions of 77% on average.",
"title": ""
},
{
"docid": "62319a41108f8662f6237a3935ffa8c6",
"text": "This interpretive study examined how the marriage renewal ritual reflects the social construction of marriage in the United States. Two culturally prominent ideologies of marriage were interwoven in our interviews of 25 married persons who had renewed their marriage vows: (a) a dominant ideology of community and (b) a more muted ideology of individualism. The ideology of community was evidenced by a construction of marriage featuring themes of public accountability, social embeddedness, and permanence. By contrast, the ideology of individualism constructed marriage around themes of love, choice, and individual growth. Most interpersonal communication scholars approach the study of marriage in one of two ways: (a) marriage as context, or (b) marriage as outcome. In contrast, in the present study we adopt an alternative way to envision marriage: marriage as cultural performance. We frame this study using two complementary theoretical perspectives: social constructionism and ritual performance theory. In particular, we examine how the cultural performance of marriage renewal rituals reflects the social construction of marriage in the United States. In an interpretive analysis of interviews with marital partners who had recently renewed their marriage vows, we examine the extent to which the two most prominent ideological perspectives on marriage—individualism and community—organize the meaning of marriage for our participants. B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 2 The Socially Contested Construction of Marriage Communication scholars interested in face-to-face interaction tend to adopt one of two general approaches to the study of marriage, what Whitchurch and Dickson (1999) have called the interpersonal communication approach and the family communication approach. The family communication approach, with which the present study is aligned, views communication as constitutive of the family. That is, through their communicative practices, parties construct their social reality of who their family is and the meanings that organize it. From this constitutive, or social constructionist perspective, social reality is an ongoing process of producing and reproducing meanings and social patterns through the interchanges among people (Berger & Luckmann, 1966; Burr, 1995; Gergen, 1994). From a family communication perspective, marriage is thus an ongoing discursive accomplishment. It is achieved through a myriad of interaction practices, including but not limited to, private exchanges between husbands and wives, exchanges between the couple and their extended kinship and friendship networks, public and private rituals such as weddings and anniversaries, and public discourse by politicians and others surrounding family values. Whitchurch and Dickson (1999) argued that, by contrast, the interpersonal communication approach views marriage as an independent or a dependent variable whose functioning in the cause-and-effect world of human behavior can be determined. For example, interpersonal communication scholars often frame marriage as an antecedent contextual variable in examining how various communicative phenomena are enacted in married couples compared with nonmarried couples, or in the premarital compared with postmarital stages of relationship development. 
Interpersonal communication scholars often also consider marriage as a dependent variable in examining which causal variables lead courtship pairs to marry or keep married couples from breaking up, such as the extent to which such communication phenomena as conflict or disclosive openness during courtship predict whether a couple will wed. Advocates of a constitutive or social constructionist perspective argue that the discursive production and reproduction of the social order is far from the univocal, consensually based model that scholars once envisioned (Baxter & Montgomery, 1996). Instead, the social world is a cross-current of multiple, often competing, conflictual perspectives. The social order is wrought from multivocal negotiations in which different interests, ideologies, and beliefs interact on an ongoing basis. The process of “social ordering” is not a monologic conversation of seamless coherence and consensus; rather, it is a pluralistic cacophony of discursive renderings, a multiplicity of negotiations in which different lived experiences and different systems of meaning are at stake (Billig, Condor, Edwards, Gane, Middleton, & Radley, 1988; Shotter, 1993). As Bakhtin (1981) expressed: “Every concrete utterance . . . serves as a point where centrifugal as well as centripetal forces are brought to bear. The processes of centralization and decentralization, of unification and disunification, intersect in the utterance” (p. 272). Thus, interaction events are enacted dialogically, with multiple “voices,” or perspectives, competing for discursive dominance or privilege as the hegemonic, centripetal center of a given cultural conversation in the moment. Social life is a collection of dialogues between centripetal and centrifugal groups, beliefs, ideologies, and perspectives. B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 3 In modern American society, the institution of marriage is subject to endless negotiation by those who enact and discuss it. Existing research suggests that marriage is a contested terrain whose boundary is disputed by scholars and laypersons alike. One belief is that marriage is essentially the isolated domain of the two married spouses, a private haven separate from the obligations and constraints of the broader social order. The other belief is that marriage is a social institution that is embedded practically and morally in the broader society. Bellah and his colleagues (Bellah, Madsen, Sullivan, Swidler, & Tipton, 1985) have argued that this “boundary dispute” surrounding marriage reflects an omnipresent ideological tension in the American society that can be traced to precolonial times—a tension between the cultural strands of utilitarian/expressive individualism and moral/ social community. The marriage of utilitarian/expressive individualism emphasizes freedom from societal traditions and obligations, privileging instead its private existence in fulfilling the emotional and psychological needs of the two spouses. Marriage, according to this ideology, is not conceived as a binding obligation; rather, it is viewed as existing only as the expression of the choices of the free selves who constitute the union. Marriage is built on love for partner, expressive openness between partners, self-development, and self-gratification. It is a psychological contract negotiated between self-fulfilled individuals acting in their own self-interests. Should marriage cease to be gratifying to the selves in it, it should naturally end. 
Bellah et al. (1985) argue that this conception of marriage dominates the discursive landscape of modern American society, occupying, in Bakhtin’s (1981) terms, the centripetal center. By contrast, the moral/social community view of marriage emphasizes its existence as a social institution with obligations to uphold traditional values of life-long commitment and duty, and to cohere with other social institutions in maintaining the existing moral and social order. According to this second ideology, marriage is anchored by social obligation—expectations, duties, and accountabilities to others. In this way, marriage is grounded in its ties to the larger society and is not simply a private haven for emotional gratification and intimacy for the two spouses. Bellah et al. (1985) argue that this view of marriage, although clearly distinguishable in the discursive landscape of modern American society, occupies the centrifugal margin rather than the hegemonic center in modern social constructions of marriage in the United States. These two cultural ideologies of marriage also are readily identifiable in existing social scientific research on marital communication (Allan, 1993). The “private haven” ideology is the one that dominates existing research on communication in marriage (Milardo & Wellman, 1992). In this sort of research on marital communication, scholars draw a clear boundary demarcation around the spousal unit and proceed to understand how marriage works by directing their empirical gaze inward to the psychological characteristics of the two married persons and the interactions that take place within this dyad (Duck, 1993). By contrast, other more sociologically oriented scholars who study communication in marriage emphasize that the marital relationship is different from its nonmarital counterparts of romantic and cohabiting couples precisely because of its status as an institutionalized social unit (e.g., McCall, McCall, Denzin, Suttles, & Kurth, 1970). Scholars who adopt the latter view direct their empirical gaze outside marital dyads to examine how marriage is B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 4 enacted in the presence of societal influences, such as legitimization and acceptance of a pair by their kinship and friendship networks, and societal barriers to marital dissolution (e.g., Milardo, 1988). A third approach to the study of marriage is identifiable in the growing number of dialogically oriented scholars interested in communication in personal relationships who are pointing to the status of marriage as simultaneously a private culture of two as well as an institutionalized element of the broader social order (e.g., Brown, Airman, Werner, 1992; Montgomery, 1992). According to Shotter (1993) and Bellah et al. (1985), couples face this dilemma of double accountability on an ongoing basis. Although the ideology of utilitarian/expressive individualism is given dominance, “most Americans are, in fact, caught between ideals of freedom and obligation” (Bellah et al., p. 102). For example, Shotter",
"title": ""
},
{
"docid": "8cdd54a8bd288692132b57cb889b2381",
"text": "This research deals with the soft computing methodology of fuzzy cognitive map (FCM). Here a mathematical description of FCM is presented and a new methodology based on fuzzy logic techniques for developing the FCM is examined. The capability and usefulness of FCM in modeling complex systems and the application of FCM to modeling and describing the behavior of a heat exchanger system is presented. The applicability of FCM to model the supervisor of complex systems is discussed and the FCM-supervisor for evaluating the performance of a system is constructed; simulation results are presented and discussed.",
"title": ""
},
{
"docid": "0f10aa71d58858ea1d8d7571a7cbfe22",
"text": "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.",
"title": ""
},
{
"docid": "cb702c48a242c463dfe1ac1f208acaa2",
"text": "In 2011, Lake Erie experienced the largest harmful algal bloom in its recorded history, with a peak intensity over three times greater than any previously observed bloom. Here we show that long-term trends in agricultural practices are consistent with increasing phosphorus loading to the western basin of the lake, and that these trends, coupled with meteorological conditions in spring 2011, produced record-breaking nutrient loads. An extended period of weak lake circulation then led to abnormally long residence times that incubated the bloom, and warm and quiescent conditions after bloom onset allowed algae to remain near the top of the water column and prevented flushing of nutrients from the system. We further find that all of these factors are consistent with expected future conditions. If a scientifically guided management plan to mitigate these impacts is not implemented, we can therefore expect this bloom to be a harbinger of future blooms in Lake Erie.",
"title": ""
},
{
"docid": "073a2c6743b95913b090dfc17204f880",
"text": "Recent work has explored the problem of autonomous navigation by imitating a teacher and learning an end-toend policy, which directly predicts controls from raw images. However, these approaches tend to be sensitive to mistakes by the teacher and do not scale well to other environments or vehicles. To this end, we propose Observational Imitation Learning (OIL), a novel imitation learning variant that supports online training and automatic selection of optimal behavior by observing multiple imperfect teachers. We apply our proposed methodology to the challenging problems of autonomous driving and UAV racing. For both tasks, we utilize the Sim4CV simulator [18] that enables the generation of large amounts of synthetic training data and also allows for online learning and evaluation. We train a perception network to predict waypoints from raw image data and use OIL to train another network to predict controls from these waypoints. Extensive experiments demonstrate that our trained network outperforms its teachers, conventional imitation learning (IL) and reinforcement learning (RL) baselines and even humans in simulation.",
"title": ""
},
{
"docid": "c690a72c51e1aac0f061d71d5a29e15b",
"text": "This research examines the relationship between ERP systems and innovation from a knowledgebased perspective. Building upon the multi-dimensional conceptualization of absorptive capacity by Zahra and George [Zahra, S.A., George, G., 2002. Absorptive capacity: a review, reconceptualization, and extension. Academy of Management Journal 27 (2), 185–203], a theoretical framework is developed to specify the relationships between ERP-related knowledge impacts and potential/realized absorptive capacity for business process innovation. The implication of the knowledge-based analysis in this paper is that ERP systems present dialectical contradictions, both enabling and constraining business process innovation. The model highlights areas where active management has potential to enhance the capabilities of a firm for sustained innovation of its business processes. Future research directions are also outlined. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "07179377e99a40beffcb50ac039ca503",
"text": "RF-powered computers are small devices that compute and communicate using only the power that they harvest from RF signals. While existing technologies have harvested power from ambient RF sources (e.g., TV broadcasts), they require a dedicated gateway (like an RFID reader) for Internet connectivity. We present Wi-Fi Backscatter, a novel communication system that bridges RF-powered devices with the Internet. Specifically, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity to RF-powered devices. To show Wi-Fi Backscatter's feasibility, we build a hardware prototype and demonstrate the first communication link between an RF-powered device and commodity Wi-Fi devices. We use off-the-shelf Wi-Fi devices including Intel Wi-Fi cards, Linksys Routers, and our organization's Wi-Fi infrastructure, and achieve communication rates of up to 1 kbps and ranges of up to 2.1 meters. We believe that this new capability can pave the way for the rapid deployment and adoption of RF-powered devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled.",
"title": ""
},
{
"docid": "bfcef77dedf22118700737904be13c0e",
"text": "Autonomous operation is becoming an increasingly important factor for UAVs. It enables a vehicle to decide on the most appropriate action under consideration of the current vehicle and environment state. We investigated the decision-making process using the cognitive agent-based architecture Soar, which uses techniques adapted from human decision-making. Based on Soar an agent was developed which enables UAVs to autonomously make decisions and interact with a dynamic environment. One or more UAV agents were then tested in a simulation environment which has been developed using agent-based modelling. By simulating a dynamic environment, the capabilities of a UAV agent can be tested under defined conditions and additionally its behaviour can be visualised. The agent’s abilities were demonstrated using a scenario consisting of a highly dynamic border-surveillance mission with multiple autonomous UAVs. We can show that the autonomous agents are able to execute the mission successfully and can react adaptively to unforeseen events. We conclude that using a cognitive architecture is a promising approach for modelling autonomous behaviour.",
"title": ""
}
] |
scidocsrr
|
8f1baf0bdef6448d47338bd706032d67
|
Compressed K-Means for Large-Scale Clustering
|
[
{
"docid": "9f746a67a960b01c9e33f6cd0fcda450",
"text": "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.",
"title": ""
}
] |
[
{
"docid": "3fe585dbb422a88f41f1100f9b2dd477",
"text": "Synchronous reluctance motor (SynRM) is a potential candidate for high starting torque requirements of traction drives. Any demagnetization risk is prevented since there is not any permanent magnet on the rotor or stator structure. On the other hand, the high rotor starting current problem, that is common in induction machines is ignored since there is not any winding on the rotor. Indeed, absence of permanent magnet in motor structure and its simplicity leads to lower finished cost in comparison with other competitors. Also high average torque and low ripple content is important in electrical drives employed in electric vehicle applications. High amount of torque ripple is one of the problems of SynRM, which is considered in many researches. In this paper, a new design of the SynRM is proposed in order to reduce the torque ripple while maintaining the average torque. For this purpose, auxiliary flux barriers in the rotor structure are employed that reduce the torque ripple significantly. Proposed design electromagnetic performance is simulated by finite element analysis. It is shown that the proposed design reduces torque ripple significantly without any reduction in average torque.",
"title": ""
},
{
"docid": "90b0ee9cf92c3ff905c2dffda9e3e509",
"text": "Julius is an open-source large-vocabulary speech recognition software used for both academic research and industrial applications. It executes real-time speech recognition of a 60k-word dictation task on low-spec PCs with small footprint, and even on embedded devices. Julius supports standard language models such as statistical N-gram model and rule-based grammars, as well as Hidden Markov Model (HMM) as an acoustic model. One can build a speech recognition system of his own purpose, or can integrate the speech recognition capability to a variety of applications using Julius. This article describes an overview of Julius, major features and specifications, and summarizes the developments conducted in the recent years.",
"title": ""
},
{
"docid": "6be73a6559c7f1b99cec51125169fd5b",
"text": "We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.",
"title": ""
},
{
"docid": "27beef0016282d21eeb95c0f830c6fc2",
"text": "Static analysis has been successfully used in many areas, from verifying mission-critical software to malware detection. Unfortunately, static analysis often produces false positives, which require significant manual effort to resolve. In this paper, we show how to overlay a probabilistic model, trained using domain knowledge, on top of static analysis results, in order to triage static analysis results. We apply this idea to analyzing mobile applications. Android application components can communicate with each other, both within single applications and between different applications. Unfortunately, techniques to statically infer Inter-Component Communication (ICC) yield many potential inter-component and inter-application links, most of which are false positives. At large scales, scrutinizing all potential links is simply not feasible. We therefore overlay a probabilistic model of ICC on top of static analysis results. Since computing the inter-component links is a prerequisite to inter-component analysis, we introduce a formalism for inferring ICC links based on set constraints. We design an efficient algorithm for performing link resolution. We compute all potential links in a corpus of 11,267 applications in 30 minutes and triage them using our probabilistic approach. We find that over 95.1% of all 636 million potential links are associated with probability values below 0.01 and are thus likely unfeasible links. Thus, it is possible to consider only a small subset of all links without significant loss of information. This work is the first significant step in making static inter-application analysis more tractable, even at large scales.",
"title": ""
},
{
"docid": "37437fb45a309bc887ee68da304ec370",
"text": "We introduce WebGazer, an online eye tracker that uses common webcams already present in laptops and mobile devices to infer the eye-gaze locations of web visitors on a page in real time. The eye tracking model self-calibrates by watching web visitors interact with the web page and trains a mapping between features of the eye and positions on the screen. This approach aims to provide a natural experience to everyday users that is not restricted to laboratories and highly controlled user studies. WebGazer has two key components: a pupil detector that can be combined with any eye detection library, and a gaze estimator using regression analysis informed by user interactions. We perform a large remote online study and a small in-person study to evaluate WebGazer. The findings show that WebGazer can learn from user interactions and that its accuracy is sufficient for approximating the user’s gaze. As part of this paper, we release the first eye tracking library that can be easily integrated in any website for real-time gaze interactions, usability studies, or web research.",
"title": ""
},
{
"docid": "eceaa3b1abefc76a43ed4843f3aef7d2",
"text": "Materials exposed to the elements change in appearance because of aging. Because wood is an organic substance, cracks and the surface erosion occur easily. To produce realistic computer graphic images, we need simulate the aging phenomenon also. Here, we propose a visual simulation of the distortion, cracking, and erosion of wood. In this method, wood is represented by a tetrahedral mesh. By setting semi-physical variables at each vertex in this mesh, a visual simulation of wood aging can be accomplished. The surface of the wood is defined by values assigned to the superficial tetrahedral mesh vertices. Changes in the surface are achieved by value changes. The effectiveness of this method is demonstrated by applications on a plank and shapes such as a bunny and an armadillo statue.",
"title": ""
},
{
"docid": "f563b8d89c88b41a7b23c8dfe330044c",
"text": "Recently, the V2 type of constant on-time control has been widely used to improve light-load efficiency. In V2 implementation, the nonlinear PWM modulator is much more complicated than usual, since not only is the inductor current information fed back to the modulator, but the capacitor voltage ripple information is also fed back to the modulator. Generally speaking, there is no sub-harmonic oscillation in constant on-time control. However, the delay due to the capacitor ripple results in sub-harmonic oscillation in V2 constant on-time control. So far, there has been no accurate model to predict instability issue due to the capacitor ripple. This paper presents a new modeling approach for V2 constant on-time control. The power stage, the switches and the PWM modulator are treated as a single entity and modeled based the describing function method. The model for the V2 constant on-time control achieved by the new approach can accurately predict sub-harmonic oscillation. Two solutions are discussed to solve the instability issue. The extension of the model to other types of V2 current-mode control is also shown in the paper. Simulation and experimental results are used to verify the proposed model.",
"title": ""
},
{
"docid": "edb17cb58e7fd5862c84b53e9c9f2915",
"text": "0747-5632/$ see front matter 2012 Elsevier Ltd. A doi:10.1016/j.chb.2011.12.003 ⇑ Corresponding author. Tel.: +49 40 41346826; fax E-mail addresses: sabine.trepte@uni-hamburg.de ( uni-hamburg.de (L. Reinecke), keno.juechems@stu Juechems). Online gaming has gained millions of users around the globe, which have been shown to virtually connect, to befriend, and to accumulate online social capital. Today, as online gaming has become a major leisure time activity, it seems worthwhile asking for the underlying factors of online social capital acquisition and whether online social capital increases offline social support. In the present study, we proposed that the online game players’ physical and social proximity as well as their mutual familiarity influence bridging and bonding social capital. Physical proximity was predicted to positively influence bonding social capital online. Social proximity and familiarity were hypothesized to foster both online bridging and bonding social capital. Additionally, we hypothesized that both social capital dimensions are positively related to offline social support. The hypotheses were tested with regard to members of e-sports clans. In an online survey, participants (N = 811) were recruited via the online portal of the Electronic Sports League (ESL) in several countries. The data confirmed all hypotheses, with the path model exhibiting an excellent fit. The results complement existing research by showing that online gaming may result in strong social ties, if gamers engage in online activities that continue beyond the game and extend these with offline activities. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d054f82c051282c4706d1d5468ee821d",
"text": "We present the LaRED, a Large RGB-D Extensible hand gesture Dataset, recorded with an Intel's newly-developed short range depth camera. This dataset is unique and differs from the existing ones in several aspects. Firstly, the large volume of data recorded: 243, 000 tuples where each tuple is composed of a color image, a depth image, and a mask of the hand region. Secondly, the number of different classes provided: a total of 81 classes (27 gestures in 3 different rotations). Thirdly, the extensibility of dataset: the software used to record and inspect the dataset is also available, giving the possibility for future users to increase the number of data as well as the number of gestures. Finally, in this paper, some experiments are presented to characterize the dataset and establish a baseline as the start point to develop more complex recognition algorithms. The LaRED dataset is publicly available at: http://mclab.citi.sinica.edu.tw/dataset/lared/lared.html.",
"title": ""
},
{
"docid": "6acd1583b23a65589992c3297250a603",
"text": "Trichostasis spinulosa (TS) is a common but rarely diagnosed disease. For diagnosis, it's sufficient to see a bundle of vellus hair located in a keratinous sheath microscopically. In order to obtain these vellus hair settled in comedone-like openings, Standard skin surface biopsy (SSSB), a non-invasive method was chosen. It's aimed to remind the differential diagnosis of TS in treatment-resistant open comedone-like lesions and discuss the SSSB method in diagnosis. A 25-year-old female patient was admitted with a complaint of the black spots located on bilateral cheeks and nose for 12 years. In SSSB, multiple vellus hair bundles in funnel-shaped structures were observed under the microscope, and a diagnosis of 'TS' was made. After six weeks of treatment with tretinoin 0.025% and 4% erythromycin jel topically, the appearance of black macules was significantly reduced. Treatment had to be terminated due to her pregnancy, and the lesions recurred within 1 month. It's believed that TS should be considered in the differential diagnosis of treatment-resistant open comedone-like lesions, and SSSB might be an inexpensive and effective alternative method for the diagnosis of TS.",
"title": ""
},
{
"docid": "549c0bacda4b663cb518003883d09f2c",
"text": "The key to building an evolvable dialogue system in real-world scenarios is to ensure an affordable on-line dialogue policy learning, which requires the on-line learning process to be safe, efficient and economical. But in reality, due to the scarcity of real interaction data, the dialogue system usually grows slowly. Besides, the poor initial dialogue policy easily leads to bad user experience and incurs a failure of attracting users to contribute training data, so that the learning process is unsustainable. To accurately depict this, two quantitative metrics are proposed to assess safety and efficiency issues. For solving the unsustainable learning problem, we proposed a complete companion teaching framework incorporating the guidance from the human teacher. Since the human teaching is expensive, we compared various teaching schemes answering the question how and when to teach, to economically utilize teaching budget, so that make the online learning process affordable.",
"title": ""
},
{
"docid": "260b39661df5cb7ddb9c4cf7ab8a36ba",
"text": "Deblurring camera-based document image is an important task in digital document processing, since it can improve both the accuracy of optical character recognition systems and the visual quality of document images. Traditional deblurring algorithms have been proposed to work for natural-scene images. However the natural-scene images are not consistent with document images. In this paper, the distinct characteristics of document images are investigated. We propose a content-aware prior for document image deblurring. It is based on document image foreground segmentation. Besides, an upper-bound constraint combined with total variation based method is proposed to suppress the rings in the deblurred image. Comparing with the traditional general purpose deblurring methods, the proposed deblurring algorithm can produce more pleasing results on document images. Encouraging experimental results demonstrate the efficacy of the proposed method.",
"title": ""
},
{
"docid": "5cc7e2a8aa48d47e623823ce1ae2d206",
"text": "We present an unsupervised hard EM approach to automatically mapping instructional recipes to action graphs, which define what actions should be performed on which objects and in what order. Recovering such structures can be challenging, due to unique properties of procedural language where, for example, verbal arguments are commonly elided when they can be inferred from context and disambiguation often requires world knowledge. Our probabilistic model incorporates aspects of procedural semantics and world knowledge, such as likely locations and selectional preferences for different actions. Experiments with cooking recipes demonstrate the ability to recover high quality action graphs, outperforming a strong sequential baseline by 8 points in F1, while also discovering general-purpose knowledge about cooking.",
"title": ""
},
{
"docid": "8c1c9ba389d0e76f1dfafedcb8e3e095",
"text": "Recommender system has become an effective tool for information filtering, which usually provides the most useful items to users by a top-k ranking list. Traditional recommendation techniques such as Nearest Neighbors (NN) and Matrix Factorization (MF) have been widely used in real recommender systems. However, neither approaches can well accomplish recommendation task since that: (1) most NN methods leverage the neighbor's behaviors for prediction, which may suffer the severe data sparsity problem; (2) MF methods are less sensitive to sparsity, but neighbors' influences on latent factors are not fully explored, since the latent factors are often used independently. To overcome the above problems, we propose a new framework for recommender systems, called collaborative factorization. It expresses the user as the combination of his own factors and those of the neighbors', called collaborative latent factors, and a ranking loss is then utilized for optimization. The advantage of our approach is that it can both enjoy the merits of NN and MF methods. In this paper, we take the logistic loss in RankNet and the likelihood loss in ListMLE as examples, and the corresponding collaborative factorization methods are called CoF-Net and CoF-MLE. Our experimental results on three benchmark datasets show that they are more effective than several state-of-the-art recommendation methods.",
"title": ""
},
{
"docid": "6adf6cd920abf2987be8963b2f1641d6",
"text": "This paper presents a diffusion method for generating terrains from a set of parameterized curves that characterize the landform features such as ridge lines, riverbeds or cliffs. Our approach provides the user with an intuitive vector-based feature-oriented control over the terrain. Different types of constraints (such as elevation, slope angle and roughness) can be attached to the curves so as to define the shape of the terrain. The terrain is generated from the curve representation by using an efficient multigrid diffusion algorithm. The algorithm can be efficiently implemented on the GPU, which allows the user to interactively create a vast variety of landscapes.",
"title": ""
},
{
"docid": "9e1a375cf8b90e054f2743a25e5af15b",
"text": "The term “employee turnover” is a crucial metric that's usually central to organizations workforce planning and strategy. the explanations why staff leave their current positions; not simply the actual fact that they leave have crucial implications for future retention rates among current staff, job satisfaction and employee engagement and an organization’s ability to draw in proficient folks for job vacancies. The impact of turnover has received substantial attention by senior management, human resources professionals and construction engineers particularly project engineers in construction projects. It’s tried to be one among the foremost expensive and ostensibly intractable human resource challenges braving many organizations globally. The aim of this analysis is so, to seek out the particular reasons behind turnover and its damaging effects on the development industries in Kerala. To explore turnover in larger detail, this text can examine the most sources of turnover rate, its effects and advocate some ways on however a company will retain staff and scale back turnover rate in housing industry in Kerala. The results of this study area unit expected to be helpful for numerous construction firms for taking remedial measures to scale back the worker turnover that is the major resource in determining the general success of a project.",
"title": ""
},
{
"docid": "240d47115c8bbf98e15ca4acae13ee62",
"text": "A trusted and active community aided and supported by the Internet of Things (IoT) is a key factor in food waste reduction and management. This paper proposes an IoT based context aware framework which can capture real-time dynamic requirements of both vendors and consumers and perform real-time match-making based on captured data. We describe our proposed reference framework and the notion of smart food sharing containers as enabling technology in our framework. A prototype system demonstrates the feasibility of a proposed approach using a smart container with embedded sensors.",
"title": ""
},
{
"docid": "c25b4015787e56f241cabf5e76cb3cc6",
"text": "Clients with generalized anxiety disorder (GAD) received either (a) applied relaxation and self-control desensitization, (b) cognitive therapy, or (c) a combination of these methods. Treatment resulted in significant improvement in anxiety and depression that was maintained for 2 years. The large majority no longer met diagnostic criteria; a minority sought further treatment during follow-up. No differences in outcome were found between conditions; review of the GAD therapy literature suggested that this may have been due to strong effects generated by each component condition. Finally, interpersonal difficulties remaining at posttherapy, measured by the Inventory of Interpersonal Problems Circumplex Scales (L. E. Alden, J. S. Wiggins, & A. L. Pincus, 1990) in a subset of clients, were negatively associated with posttherapy and follow-up improvement, suggesting the possible utility of adding interpersonal treatment to cognitive-behavioral therapy to increase therapeutic effectiveness.",
"title": ""
},
{
"docid": "83e897a37aca4c349b4a910c9c0787f4",
"text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.",
"title": ""
},
{
"docid": "c210e0a2ba0d8daf6935f4d825319886",
"text": "Monte Carlo integration is a powerful technique for the evaluation of difficult integrals. Applications in rendering include distribution ray tracing, Monte Carlo path tracing, and form-factor computation for radiosity methods. In these cases variance can often be significantly reduced by drawing samples from several distributions, each designed to sample well some difficult aspect of the integrand. Normally this is done by explicitly partitioning the integration domain into regions that are sampled differently. We present a powerful alternative for constructing robust Monte Carlo estimators, by combining samples from several distributions in a way that is provably good. These estimators are unbiased, and can reduce variance significantly at little additional cost. We present experiments and measurements from several areas in rendering: calculation of glossy highlights from area light sources, the “final gather” pass of some radiosity algorithms, and direct solution of the rendering equation using bidirectional path tracing. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.3 [Computer Graphics]: Picture/Image Generation; G.1.9 [Numerical Analysis]: Integral Equations— Fredholm equations. Additional",
"title": ""
}
] |
scidocsrr
|
f45d4f44fd03a93233bb12d8c1d0a983
|
Generating Counterfactual Explanations with Natural Language
|
[
{
"docid": "dc3495ec93462e68f606246205a8416d",
"text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"title": ""
},
{
"docid": "b278b9e532600ea1da8c19e07807d899",
"text": "Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system’s inner-workings. This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any existing classification architecture to generate natural language explanations of the classifications. The success of the module relies on the assumption that the network’s computation and reasoning is represented in its internal layer activations. While in principle InterpNET could be applied to any existing classification architecture, it is evaluated via an image classification and explanation task. Experiments on a CUB bird classification and explanation dataset show qualitatively and quantitatively that the model is able to generate high-quality explanations. While the current state-of-the-art METEOR score on this dataset is 29.2, InterpNET achieves a much higher METEOR score of 37.9.",
"title": ""
}
] |
[
{
"docid": "b9538c45fc55caff8b423f6ecc1fe416",
"text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.",
"title": ""
},
{
"docid": "d985c547cd57a25a6724f369da8aa1dd",
"text": "DEFINITION A majority of today’s data is constantly evolving and fundam entally distributed in nature. Data for almost any large-sc ale data-management task is continuously collected over a wide area, and at a much greater rate than ever before. Compared to t aditional, centralized stream processing, querying such la rge-scale, evolving data collections poses new challenges , due mainly to the physical distribution of the streaming data and the co mmunication constraints of the underlying network. Distri buted stream processing algorithms should guarantee efficiency n ot o ly in terms ofspaceand processing time(as conventional streaming techniques), but also in terms of the communication loadimposed on the network infrastructure.",
"title": ""
},
{
"docid": "6d9735b19ab2cb1251bd294045145367",
"text": "Waveguide twists are often necessary to provide polarization rotation between waveguide-based components. At terahertz frequencies, it is desirable to use a twist design that is compact in order to reduce loss; however, these designs are difficult if not impossible to realize using standard machining. This paper presents a micromachined compact waveguide twist for terahertz frequencies. The Rud-Kirilenko twist geometry is ideally suited to the micromachining processes developed at the University of Virginia. Measurements of a WR-1.5 micromachined twist exhibit a return loss near 20 dB and a median insertion loss of 0.5 dB from 600 to 750 GHz.",
"title": ""
},
{
"docid": "be6f84eea50ebe213129f194803de864",
"text": "In this paper we introduce a natural mathematical structure derived from Samuel Beckett’s play “Quad”. We call this structure a binary Beckett-Gray code. We enumerate all codes for n ≤ 6 and give examples for n = 7, 8. Beckett-Gray codes can be realized as successive states of a queue data structure. We show that the binary reflected Gray code can be realized as successive states of two stack data structures.",
"title": ""
},
{
"docid": "d22bf8b3715156722eca5260b90b13bc",
"text": "Tremendous progress has been made in object recognition with deep convolutional neural networks (CNNs), thanks to the availability of large-scale annotated dataset. With the ability of learning highly hierarchical image feature extractors, deep CNNs are also expected to solve the Synthetic Aperture Radar (SAR) target classification problems. However, the limited labeled SAR target data becomes a handicap to train a deep CNN. To solve this problem, we propose a transfer learning based method, making knowledge learned from sufficient unlabeled SAR scene images transferrable to labeled SAR target data. We design an assembled CNN architecture consisting of a classification pathway and a reconstruction pathway, together with a feedback bypass additionally. Instead of training a deep network with limited dataset from scratch, a large number of unlabeled SAR scene images are used to train the reconstruction pathway with stacked convolutional auto-encoders (SCAE) at first. Then, these pre-trained convolutional layers are reused to transfer knowledge to SAR target classification tasks, with feedback bypass introducing the reconstruction loss simultaneously. The experimental results demonstrate that transfer learning leads to a better performance in the case of scarce labeled training data and the additional feedback bypass with reconstruction loss helps to boost the capability of classification pathway.",
"title": ""
},
{
"docid": "23630a10ef231b4c0c10c1828cfce6b3",
"text": "Mobile telecommunications such as voice telephony, SMS, WLAN (wireless local area network) technology, and personal digital assistants (PDAs) are being used by broader and broader sections of society. As a result of this increased usage (particularly for mobile-phone use), there is a small but growing body of research that indicates that the use of mobile communications is influencing how we go about our daily lives from both a social and economic perspective. For example, as mobile-phone usage increases, it is no longer unusual to see mobile phones being used in a wide variety of contexts (e.g., social, business) in various locations (e.g., trains, cafes). Wei and Leung (1999) found that the majority of calls being made by mobile-phone users take place on the streets, on public transport, in shops, and in restaurants. What is the social impact of mobile-phone use in public places and society in general? This is the question that Rich Ling sets out to investigate in his book The Mobile Connection: The Cell Phone's Impact on Society. Ling's book, through a detailed description and analysis of several studies, provides us with a valuable insight into how this relatively new form of technology is changing people's social dynamics in public life. In particular, he focuses on how the mobile phone can be used to create a sense of security for the individual, its use in what he calls the \" micro-coordination \" of everyday life, and the impact the mobile phone has had on the lives of teenagers. In addition , he raises the possibility that the mobile phone could be a disturbing influence in our public life (we all have experienced sitting next to someone on a train having a loud mobile-phone conversation) and takes a detailed look at the SMS communication phenomenon. The first two chapters of the book provide the reader with general background information on the growth of the worldwide mobile-phone market and also provide an",
"title": ""
},
{
"docid": "c21517df671a485888d2dde4af3306da",
"text": "While discussion about knowledge management often centers around how knowledge may best be codified into an explicit format for use in decision support or expert systems, some knowledge best serves the organization when it is kept in tacit form. We draw upon the resource-based view to identify how information technology can best be used during different types of strategic change. Specifically, we suggest that different change strategies focus on different combinations of tacit and explicit knowledge that make certain types of information technology more appropriate in some situations than in others. q 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "5cc26542d0f4602b2b257e19443839b3",
"text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both the analytical and simulation modeling to addresses the complexity of cloud computing systems. Analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with required accuracy. Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control delay of servicing users requests.",
"title": ""
},
{
"docid": "105b179a6cb824f6edb04d703a9f42a8",
"text": "This paper is concerned with the problem of robust H∞ output feedback control for a class of continuous-time Takagi-Sugeno (T-S) fuzzy affine dynamic systems using quantized measurements. The objective is to design a suitable observer-based dynamic output feedback controller that guarantees the global stability of the resulting closed-loop fuzzy system with a prescribed H∞ disturbance attenuation level. Based on common/piecewise quadratic Lyapunov functions combined with S-procedure and some matrix inequality convexification techniques, some new results are developed to the controller synthesis for the underlying continuous-time T-S fuzzy affine systems with unmeasurable premise variables. All the solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, two simulation examples are provided to illustrate the advantages of the proposed approaches.",
"title": ""
},
{
"docid": "0a3eaf68a3f1f2587f2456cbb29e1f06",
"text": "OBJECTIVE\nTo develop a single trial motor imagery (MI) classification strategy for the brain-computer interface (BCI) applications by using time-frequency synthesis approach to accommodate the individual difference, and using the spatial patterns derived from electroencephalogram (EEG) rhythmic components as the feature description.\n\n\nMETHODS\nThe EEGs are decomposed into a series of frequency bands, and the instantaneous power is represented by the envelop of oscillatory activity, which forms the spatial patterns for a given electrode montage at a time-frequency grid. Time-frequency weights determined by training process are used to synthesize the contributions from the time-frequency domains.\n\n\nRESULTS\nThe present method was tested in nine human subjects performing left or right hand movement imagery tasks. The overall classification accuracies for nine human subjects were about 80% in the 10-fold cross-validation, without rejecting any trials from the dataset. The loci of MI activity were shown in the spatial topography of differential-mode patterns over the sensorimotor area.\n\n\nCONCLUSIONS\nThe present method does not contain a priori subject-dependent parameters, and is computationally efficient. The testing results are promising considering the fact that no trials are excluded due to noise or artifact.\n\n\nSIGNIFICANCE\nThe present method promises to provide a useful alternative as a general purpose classification procedure for MI classification.",
"title": ""
},
{
"docid": "300553571302f85f39ce4902f84ca527",
"text": "Student motivation as an academic enabler for school success is discussed. Contrary to many views, however, the authors conceive of student motivation as a multifaceted construct with different components. Accordingly, the article includes a discussion of four key components of student motivation including academic self-efficacy, attributions, intrinsic motivation, and achievement goals. Research on each of these four components is described, research relating these four components to academic achievement and other academic enablers is reviewed, and suggestions are offered for instruction and assessment. Psychologists and educators have long considered the role of motivation in student achievement and learning (for a review see Graham & Weiner, 1996). Much of the early research on student achievement and learning separated cognitive and motivational factors and pursued very distinct lines of research that did not integrate cognition and motivation. However, since at least the 1980s there has been a sustained research focus on how motivational and cognitive factors interact and jointly influence student learning and achievement. In more colloquial terms, there is a recognition that students need both the cognitive skill and the motivational will to do well in school (Pintrich & Schunk, 2002). This miniseries continues in this tradition by highlighting the contribution of both motivational and cognitive factors for student academic success. The integration of motivational and cognitive factors was facilitated by the shift in motivational theories from traditional achievement motivation models to social cognitive models of motivation (Pintrich & Schunk, 2002). One of the most important assumptions of social cognitive models of motivation is that motivation is a dynamic, multifaceted phenomenon that contrasts with the quantitative view taken by traditional models of motivation. In other words, these newer social cognitive models do not assume that students are either “motivated” or “not motivated” or that student motivation can be characterized in some quantitative manner between two endpoints on a single continuum. Rather, social cognitive models stress that students can be motivated in multiple ways and the important issue is understanding how and why students are motivated for school achievement. This change in focus implies that teachers or school psychologists should not label students as “motivated” or “not motivated” in some global fashion. Furthermore, assessment instruments that generate a single global “motivation” score for students may be misleading in terms of a more multifaceted understanding of student motivation. Accordingly, in the discussion of motivation as an academic enabler, many aspects of student motivation including self-efficacy, attributions, intrinsic motivation, and goals are considered. A second important assumption of social cognitive models of motivation is that motivation is not a stable trait of an individual, but is more situated, contextual, and domain-specific. In other words, not only are students motivated in multiple ways, but their motivation can vary depending on the situation or context in the classroom or school. Although this assumption makes it more difficult for research and assessment efforts, it means that student motivation is conceived as being inherently changeable and sensitive to the context. 
This provides hope for teachers and school psychologists and suggests that instructional efforts and the design of classrooms and schools can make a difference in motivating students for academic achievement. This situated assumption means that student motivation probably varies as a function of subject matter domains and classrooms (e.g., Bong, 2001). For example, within social cognitive models, motivation is usually assessed for a specific subject area such as math, reading, science, or social studies and in reference to a specific classroom or teacher. In some ways, this also fits with teachers' and parents' own perceptions and experiences as they find that some children are quite motivated for mathematics, whereas others hate it, and also observe these motivational differences with other subject areas as well. However, this implies that assessment instruments that assess general student motivation for school or academics may not be as useful as more domain or context specific assessment tools. A third assumption concerns the central role of cognition in social cognitive models of motivation. That is, it is not just the individual's cultural, demographic, or personality characteristics that influence motivation and achievement directly, or just the contextual characteristics of the classroom environment that shape motivation and achievement, but rather the individual's active regulation of his or her motivation, thinking, and behavior that mediates the relationships between the person, context, and eventual achievement. That is, students' own thoughts about their motivation and learning play a key role in mediating their engagement and subsequent achievement. Following from these three general assumptions, social cognitive motivational theorists have proposed a large number of different motivational constructs that may facilitate or constrain student achievement and learning. Although there are good theoretical reasons for some of these distinctions among different motivational theories and constructs, in many cases they can be confusing and less than helpful in developing applications to improve student motivation and subsequent learning in school (Pintrich, 2000a). Rather than discussing all the different motivational constructs that may be enablers of student achievement and learning, this article will focus on four key families of motivational beliefs (self-efficacy, attributions, intrinsic motivation, and goal orientations). These four families represent the currently accepted major social cognitive motivational theories (Eccles, Wigfield, & Schiefele, 1998; Graham & Weiner, 1996; Pintrich & Schunk, 2002) and, therefore, seem most relevant when thinking about how motivation relates to achievement and other academic enablers. For each of the four general components, the components are defined, a summarization is given for how the motivational component is related to student achievement and learning as well as the other academic enablers discussed in this special issue, and some implications for instruction and assessment are suggested. Although these four families are interrelated, it is beyond the scope of this article to present an interrelated model of self-efficacy, attributions, intrinsic motivation, and goal orientations. Readers interested in a more comprehensive overview may refer to Pintrich and Schunk's (2002) detailed discussion of motivational processes in schooling. 
Adaptive Self-Efficacy Beliefs as Enablers of Success A common layperson's definition of motivation is that it involves a strong personal interest in a particular subject or activity. Students who are interested are motivated and they learn and achieve because of this strong interest. Although interest as a component of student motivation will be discussed later, one of the more important motivational beliefs for student achievement is self-efficacy, which concerns beliefs about capabilities to do a task or activity. More specifically, self-efficacy has been defined as individuals' beliefs about their performance capabilities in a particular context or a specific task or domain (Bandura, 1997). Self-efficacy is assumed to be situated and contextualized, not a general belief about self-concept or self-esteem. For example, a student might have high self-efficacy for doing algebra problems, but a lower self-efficacy for geometry problems or other subject areas, depending on past successes and failures. These self-efficacy beliefs are distinct from general self-concept beliefs or self-esteem. Although the role of self-efficacy has been studied in a variety of domains including mental health and health behavior such as coping with depression or smoking cessation, business management, and athletic performance, a number of educational psychologists have examined how self-efficacy relates to behavior in elementary and secondary academic settings (e.g., Bandura, 1997; Eccles et al., 1998; Pintrich, 2000b; Pintrich & De Groot, 1990; Schunk, 1989a, 1989b, 1991). In particular, self-efficacy has been positively related to higher levels of achievement and learning as well as a wide variety of adaptive academic outcomes such as higher levels of effort and increased persistence on difficult tasks in both experimental and correlational studies involving students from a variety of age groups (Bandura, 1997; Pintrich & Schunk, 2002). Students who have more positive self-efficacy beliefs (i.e., they believe they can do the task) are more likely to work harder, persist, and eventually achieve at higher levels. In addition, there is evidence that students who have positive self-efficacy beliefs are more likely to choose to continue to take more difficult courses (e.g., advanced math courses) over the course of schooling (Eccles et al., 1998). In our own correlational research with junior high students in Michigan, we have consistently found that self-efficacy beliefs are positively related to student cognitive engagement and their use of self-regulatory strategies (similar in some ways to study skills) as well as general achievement as indexed by grades (e.g., Pintrich, 2000b; Pintrich & De Groot, 1990; Welters, Yu, & Pintrich, 1996). In summary, both experimental and correlational research in schools suggests that self-efficacy is positively related to a host of positive outcomes of schooling such as choice, persistence, cognitive engagement, use of self-regulatory strategies, and actual achievement. This generalization seems to apply to all students, as it is relatively stable across different ages and grades",
"title": ""
},
{
"docid": "8010b3fdc1c223202157419c4f61bacf",
"text": "Thanks to information explosion, data for the objects of interest can be collected from increasingly more sources. However, for the same object, there usually exist conflicts among the collected multi-source information. To tackle this challenge, truth discovery, which integrates multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. Several truth discovery methods have been proposed for various scenarios, and they have been successfully applied in diverse application domains. In this survey, we focus on providing a comprehensive overview of truth discovery methods, and summarizing them from different aspects. We also discuss some future directions of truth discovery research. We hope that this survey will promote a better understanding of the current progress on truth discovery, and offer some guidelines on how to apply these approaches in application domains.",
"title": ""
},
{
"docid": "2d0d42a6c712d93ace0bf37ffe786a75",
"text": "Personalized search systems tailor search results to the current user intent using historic search interactions. This relies on being able to find pertinent information in that user's search history, which can be challenging for unseen queries and for new search scenarios. Building richer models of users' current and historic search tasks can help improve the likelihood of finding relevant content and enhance the relevance and coverage of personalization methods. The task-based approach can be applied to the current user's search history, or as we focus on here, all users' search histories as so-called \"groupization\" (a variant of personalization whereby other users' profiles can be used to personalize the search experience). We describe a method whereby we mine historic search-engine logs to find other users performing similar tasks to the current user and leverage their on-task behavior to identify Web pages to promote in the current ranking. We investigate the effectiveness of this approach versus query-based matching and finding related historic activity from the current user (i.e., group versus individual). As part of our studies we also explore the use of the on-task behavior of particular user cohorts, such as people who are expert in the topic currently being searched, rather than all other users. Our approach yields promising gains in retrieval performance, and has direct implications for improving personalization in search systems.",
"title": ""
},
{
"docid": "ac4342a829154ebfa7cca35c36619b82",
"text": "We present a new approach to robustly solve photometric stereo problems. We cast the problem of recovering surface normals from multiple lighting conditions as a problem of recovering a low-rank matrix with both missing entries and corrupted entries, which model all types of non-Lambertian effects such as shadows and specularities. Unlike previous approaches that use Least-Squares or heuristic robust techniques, our method uses advanced convex optimization techniques that are guaranteed to find the correct low-rank matrix by simultaneously fixing its missing and erroneous entries. Extensive experimental results demonstrate that our method achieves unprecedentedly accurate estimates of surface normals in the presence of significant amount of shadows and specularities. The new technique can be used to improve virtually any photometric stereo method including uncalibrated photometric stereo.",
"title": ""
},
{
"docid": "17c65d64f360572a6009c5179457fd19",
"text": "This paper presents an unified convolutional neural network (CNN), named AUMPNet, to perform both Action Units (AUs) detection and intensity estimation on facial images with multiple poses. Although there are a variety of methods in the literature designed for facial expression analysis, only few of them can handle head pose variations. Therefore, it is essential to develop new models to work on non-frontal face images, for instance, those obtained from unconstrained environments. In order to cope with problems raised by pose variations, an unique CNN, based on region and multitask learning, is proposed for both AU detection and intensity estimation tasks. Also, the available head pose information was added to the multitask loss as a constraint to the network optimization, pushing the network towards learning better representations. As opposed to current approaches that require ad hoc models for every single AU in each task, the proposed network simultaneously learns AU occurrence and intensity levels for all AUs. The AUMPNet was evaluated on an extended version of the BP4D-Spontaneous database, which was synthesized into nine different head poses and made available to FG 2017 Facial Expression Recognition and Analysis Challenge (FERA 2017) participants. The achieved results surpass the FERA 2017 baseline, using the challenge metrics, for AU detection by 0.054 in F1-score and 0.182 in ICC(3, 1) for intensity estimation.",
"title": ""
},
{
"docid": "d0565bcb93ab719ac1f36e2d8c9dd919",
"text": "Heterogeneity among rivals implies that each firm faces a unique competitive set, despite overlapping market domains. This suggests the utility of a firm-level approach to competitor identification and analysis, particularly under dynamic environmental conditions. We take such an approach in developing a market-based and resource-based framework for scanning complex competitive fields. By facilitating a search for functional similarities among products and resources, the framework reveals relevant commonalities in an otherwise heterogeneous competitive set. Beyond its practical contribution, the paper also advances resource-based theory as a theory of competitive advantage. Most notably, we show that resource substitution conditions not only the sustainability of a competitive advantage, but the attainment of competitive advantage as well. With equifinality among resources of different types, the rareness condition for even temporary competitive advantage must include resource substitutes. It is not rareness in terms of resource type that matters, but rareness in terms of resource functionality. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "d6cca63107e04f225b66e02289c601a2",
"text": "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with a hashtag such as ‘#sarcasm’. We collected a training corpus of about 406 thousand Dutch tweets with hashtag synonyms denoting sarcasm. Assuming that the human labeling is correct (annotation of a sample indicates that about 90% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a sample of a day’s stream of 2.25 million Dutch tweets. Of the 353 explicitly marked tweets on this day, we detect 309 (87%) with the hashtag removed. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 35% of the top250 ranked tweets are indeed sarcastic. Analysis indicates that the use of hashtags reduces the further use of linguistic markers for signaling sarcasm, such as exclamations and intensifiers. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of non-verbal expressions that people employ in live interaction when conveying sarcasm. Checking the consistency of our finding in a language from another language family, we observe that in French the hashtag ‘#sarcasme’ has a similar polarity switching function, be it to a lesser extent. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9924e44d94d00a7a3dbd313409f5006a",
"text": "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty",
"title": ""
},
{
"docid": "3bcd11082fc70d52da15a5e087ab5375",
"text": "The problem of maximizing information diffusion through a network is a topic of considerable recent interest. A conventional problem is to select a set of any arbitrary k nodes as the initial influenced nodes so that they can effectively disseminate the information to the rest of the network. However, this model is usually unrealistic in online social networks since we cannot typically choose arbitrary nodes in the network as the initial influenced nodes. From the point of view of an individual user who wants to spread information as much as possible, a more reasonable model is to try to initially share the information with only some of its neighbours rather than a set of any arbitrary nodes; but how can these neighbours be effectively chosen? We empirically study how to design more effective neighbours selection strategies to maximize information diffusion. Our experimental results through intensive simulation on several real- world network topologies show that an effective neighbours selection strategy is to use node degree information for short-term propagation while a naive random selection is also adequate for long-term propagation to cover more than half of a network. We also discuss the effects of the number of initial activated neighbours. If we particularly select the highest degree nodes as initial activated neighbours, the number of initial activated neighbours is not an important factor at least for long-term propagation of information.",
"title": ""
}
] |
scidocsrr
|
80c4f4c108fd6c075a1d8e50ee7b0fb8
|
Software-Defined and Virtualized Future Mobile and Wireless Networks: A Survey
|
[
{
"docid": "83355e7d2db67e42ec86f81909cfe8c1",
"text": "everal protocols for routing and forwarding in Wireless Mesh Networks (WMN) have been proposed, such as AODV, OLSR or B.A.T.M.A.N. However, providing support for e.g. flow-based routing where flows of one source take different paths through the network is hard to implement in a unified way using traditional routing protocols. OpenFlow is an emerging technology which makes network elements such as routers or switches programmable via a standardized interface. By using virtualization and flow-based routing, OpenFlow enables a rapid deployment of novel packet forwarding and routing algorithms, focusing on fixed networks. We propose an architecture that integrates OpenFlow with WMNs and provides such flow-based routing and forwarding capabilities. To demonstrate the feasibility of our OpenFlow based approach, we have implemented a simple solution to solve the problem of client mobility in a WMN which handles the fast migration of client addresses (e.g. IP addresses) between Mesh Access Points and the interaction with re-routing without the need for tunneling. Measurements from a real mesh testbed (KAUMesh) demonstrate the feasibility of our approach based on the evaluation of forwarding performance, control traffic and rule activation time.",
"title": ""
},
{
"docid": "4d66a85651a78bfd4f7aba290c21f9a7",
"text": "Mobile carrier networks follow an architecture where network elements and their interfaces are defined in detail through standardization, but provide limited ways to develop new network features once deployed. In recent years we have witnessed rapid growth in over-the-top mobile applications and a 10-fold increase in subscriber traffic while ground-breaking network innovation took a back seat. We argue that carrier networks can benefit from advances in computer science and pertinent technology trends by incorporating a new way of thinking in their current toolbox. This article introduces a blueprint for implementing current as well as future network architectures based on a software-defined networking approach. Our architecture enables operators to capitalize on a flow-based forwarding model and fosters a rich environment for innovation inside the mobile network. In this article, we validate this concept in our wireless network research laboratory, demonstrate the programmability and flexibility of the architecture, and provide implementation and experimentation details.",
"title": ""
}
] |
[
{
"docid": "5c3358aa3d9a931ba7c9186b1f5a2362",
"text": "Compared with word-level and sentence-level convolutional neural networks (ConvNets), the character-level ConvNets has a better applicability for misspellings and typos input. Due to this, recent researches for text classification mainly focus on character-level ConvNets. However, while the majority of these researches employ English corpus for the character-level text classification, few researches have been done using Chinese corpus. This research hopes to bridge this gap, exploring character-level ConvNets for Chinese corpus test classification. We have constructed a large-scale Chinese dataset, and the result shows that character-level ConvNets works better on Chinese character dataset than its corresponding pinyin format dataset, which is the general solution in previous researches. This is the first time that character-level ConvNets has been applied to Chinese character dataset for text classification problem.",
"title": ""
},
{
"docid": "8147143579de86a5eeb668037c2b8c5d",
"text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.",
"title": ""
},
{
"docid": "f63da8e7659e711bcb7a148ea12a11f2",
"text": "We have presented two CCA-based approaches for data fusion and group analysis of biomedical imaging data and demonstrated their utility on fMRI, sMRI, and EEG data. The results show that CCA and M-CCA are powerful tools that naturally allow the analysis of multiple data sets. The data fusion and group analysis methods presented are completely data driven, and use simple linear mixing models to decompose the data into their latent components. Since CCA and M-CCA are based on second-order statistics they provide a relatively lessstrained solution as compared to methods based on higherorder statistics such as ICA. While this can be advantageous, the flexibility also tends to lead to solutions that are less sparse than those obtained using assumptions of non-Gaussianity-in particular superGaussianity-at times making the results more difficult to interpret. Thus, it is important to note that both approaches provide complementary perspectives, and hence it is beneficial to study the data using different analysis techniques.",
"title": ""
},
{
"docid": "1a9670cc170343073fba2a5820619120",
"text": "Occlusions present a great challenge for pedestrian detection in practical applications. In this paper, we propose a novel approach to simultaneous pedestrian detection and occlusion estimation by regressing two bounding boxes to localize the full body as well as the visible part of a pedestrian respectively. For this purpose, we learn a deep convolutional neural network (CNN) consisting of two branches, one for full body estimation and the other for visible part estimation. The two branches are treated differently during training such that they are learned to produce complementary outputs which can be further fused to improve detection performance. The full body estimation branch is trained to regress full body regions for positive pedestrian proposals, while the visible part estimation branch is trained to regress visible part regions for both positive and negative pedestrian proposals. The visible part region of a negative pedestrian proposal is forced to shrink to its center. In addition, we introduce a new criterion for selecting positive training examples, which contributes largely to heavily occluded pedestrian detection. We validate the effectiveness of the proposed bi-box regression approach on the Caltech and CityPersons datasets. Experimental results show that our approach achieves promising performance for detecting both non-occluded and occluded pedestrians, especially heavily occluded ones.",
"title": ""
},
{
"docid": "209203c297898a2251cfd62bdfc37296",
"text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
"title": ""
},
{
"docid": "aecaa8c028c4d1098d44d755344ad2fc",
"text": "It is known that training deep neural networks, in particular, deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One of the wellaccepted solutions facilitating the training of low precision fixed point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work is an attempt to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we will also propose and verify through experiments methods that are able to improve the training performance of deep convolutional networks in fixed point.",
"title": ""
},
{
"docid": "c45b962006b2bb13ab57fe5d643e2ca6",
"text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.",
"title": ""
},
{
"docid": "c330e97f4c7c3478670e55991ac2293c",
"text": "The MoveLab was an educational research intervention centering on a community of African American and Hispanic girls as they began to transform their self-concept in relation to computing and dance while creating technology enhanced dance performances. Students within underrepresented populations in computing often do not perceive the identity of a computer scientist as aligning with their interests or value system, leading to rejection of opportunities to participate within the discipline. To engage diverse populations in computing, we need to better understand how to support students in navigating conflicts between identities with computing and their personal interest and values. Using the construct of self-concept, we observed students in the workshop creating both congruence and dissension between their self-concept and computing. We found that creating multiple roles for participation, fostering a socially supportive community, and integrating student values within the curriculum led to students forming congruence between their self-concept and the disciplines of computing and dance.",
"title": ""
},
{
"docid": "f7792dbc29356711c2170d5140030142",
"text": "A C-Ku band GaN monolithic microwave integrated circuit (MMIC) transmitter/receiver (T/R) frontend module with a novel RF interface structure has been successfully developed by using multilayer ceramics technology. This interface improves the insertion loss with wideband characteristics operating up to 40 GHz. The module contains a GaN power amplifier (PA) with output power higher than 10 W over 6–18 GHz and a GaN low-noise amplifier (LNA) with a gain of 15.9 dB over 3.2–20.4 GHz and noise figure (NF) of 2.3–3.7 dB over 4–18 GHz. A fabricated T/R module occupying only 12 × 30 mm2 delivers an output power of 10 W up to the Ku-band. To our knowledge, this is the first demonstration of a C-Ku band T/R frontend module using GaN MMICs with wide bandwidth, 10W output power, and small size operating up to the Ku-band.",
"title": ""
},
{
"docid": "01c6476bfa806af6c35898199ad9c169",
"text": "This paper presents nonlinear tracking control systems for a quadrotor unmanned aerial vehicle under the influence of uncertainties. Assuming that there exist unstructured disturbances in the translational dynamics and the attitude dynamics, a geometric nonlinear adaptive controller is developed directly on the special Euclidean group. In particular, a new form of an adaptive control term is proposed to guarantee stability while compensating the effects of uncertainties in quadrotor dynamics. A rigorous mathematical stability proof is given. The desirable features are illustrated by numerical example and experimental results of aggressive maneuvers.",
"title": ""
},
{
"docid": "262c11ab9f78e5b3f43a31ad22cf23c5",
"text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.",
"title": ""
},
{
"docid": "2f8f1f2db01eeb9a47591e77bb1c835a",
"text": "We present an input method which enables complex hands-free interaction through 3d handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes which are attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMM) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary with over 8000 words. A statistical language model is used to enhance recognition performance and restrict the search space. We report the results from a nine-user experiment on sentence recognition for person dependent and person independent setups on 3d-space handwriting data. For the person independent setup, a word error rate of 11% is achieved, for the person dependent setup 3% are achieved. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false positive segments.",
"title": ""
},
{
"docid": "ec0d1addabab76d9c2bd044f0bfe3153",
"text": "Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author’s influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI into four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, starting with the latent Dirichlet allocation, to the more advanced models including author-link topic model and dynamic author citation topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.",
"title": ""
},
{
"docid": "76c7b343d2f03b64146a0d6ed2d60668",
"text": "Three important stages within automated 3D object reconstruction via multi-image convergent photogrammetry are image pre-processing, interest point detection for feature-based matching and triangular mesh generation. This paper investigates approaches to each of these. The Wallis filter is initially examined as a candidate image pre-processor to enhance the performance of the FAST interest point operator. The FAST algorithm is then evaluated as a potential means to enhance the speed, robustness and accuracy of interest point detection for subsequent feature-based matching. Finally, the Poisson Surface Reconstruction algorithm for wireframe mesh generation of objects with potentially complex 3D surface geometry is evaluated. The outcomes of the investigation indicate that the Wallis filter, FAST interest operator and Poisson Surface Reconstruction algorithms present distinct benefits in the context of automated image-based object reconstruction. The reported investigation has advanced the development of an automatic procedure for high-accuracy point cloud generation in multi-image networks, where robust orientation and 3D point determination has enabled surface measurement and visualization to be implemented within a single software system.",
"title": ""
},
{
"docid": "f8ba12d3fd6ebf65429a2ce5f5143dbd",
"text": "The contour-guided color palette (CCP) is proposed for robust image segmentation. It efficiently integrates contour and color cues of an image. To find representative colors of an image, color samples along long contours between regions, similar in spirit to machine learning methodology that focus on samples near decision boundaries, are collected followed by the mean-shift (MS) algorithm in the sampled color space to achieve an image-dependent color palette. This color palette provides a preliminary segmentation in the spatial domain, which is further fine-tuned by post-processing techniques such as leakage avoidance, fake boundary removal, and small region mergence. Segmentation performances of CCP and MS are compared and analyzed. While CCP offers an acceptable standalone segmentation result, it can be further integrated into the framework of layered spectral segmentation to produce a more robust segmentation. The superior performance of CCP-based segmentation algorithm is demonstrated by experiments on the Berkeley Segmentation Dataset.",
"title": ""
},
{
"docid": "a21f04b6c8af0b38b3b41f79f2661fa6",
"text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.",
"title": ""
},
{
"docid": "34ba1323c4975a566f53e2873231e6ad",
"text": "This paper describes the motivation, the realization, and the experience of incorporating simulation and hardware implementation into teaching computer organization and architecture to computer science students. It demonstrates that learning by doing has helped students to truly understand how a computer is constructed and how it really works in practice. Correlated with textbook material, a set of simulation and implementation projects were created on the basis of the work that students had done in previous homework and laboratory activities. Students can thus use these designs as building blocks for completing more complex projects at a later time. The projects cover a wide range of topics from simple adders up to ALU's and CPU's. These processors operate in a virtual manner on certain short assembly-language programs. Specifically, this paper shares the experience of using simulation tools (Alterareg Quartus II) and reconfigurable hardware prototyping platforms (Alterareg UP2 development boards)",
"title": ""
},
{
"docid": "8e1befc4318a2dd32d59acac49e2374c",
"text": "The use of Social Network Sites (SNS) is increasing nowadays especially by the younger generations. The availability of SNS allows users to express their interests, feelings and share daily routine. Many researchers prove that using user-generated content (UGC) in a correct way may help determine people's mental health levels. Mining the UGC could help to predict the mental health levels and depression. Depression is a serious medical illness, which interferes most with the ability to work, study, eat, sleep and having fun. However, from the user profile in SNS, we can collect all the information that relates to person's mood, and negativism. In this research, our aim is to investigate how SNS user's posts can help classify users according to mental health levels. We propose a system that uses SNS as a source of data and screening tool to classify the user using artificial intelligence according to the UGC on SNS. We created a model that classify the UGC using two different classifiers: Support Vector Machine (SVM), and Naïve Bayes.",
"title": ""
},
{
"docid": "601488a8e576d465a0bddd65a937c5c8",
"text": "Human activity recognition is an area of growing interest facilitated by the current revolution in body-worn sensors. Activity recognition allows applications to construct activity profiles for each subject which could be used effectively for healthcare and safety applications. Automated human activity recognition systems face several challenges such as number of sensors, sensor precision, gait style differences, and others. This work proposes a machine learning system to automatically recognise human activities based on a single body-worn accelerometer. The in-house collected dataset contains 3D acceleration of 50 subjects performing 10 different activities. The dataset was produced to ensure robustness and prevent subject-biased results. The feature vector is derived from simple statistical features. The proposed method benefits from RGB-to-YIQ colour space transform as kernel to transform the feature vector into more discriminable features. The classification technique is based on an adaptive boosting ensemble classifier. The proposed system shows consistent classification performance up to 95% accuracy among the 50 subjects.",
"title": ""
},
{
"docid": "6c3f320eda59626bedb2aad4e527c196",
"text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.",
"title": ""
}
] |
scidocsrr
|
8b3c7506d8891c99989e609d22e87525
|
Malware phylogeny generation using permutations of code
|
[
{
"docid": "8014c32fa820e1e2c54e1004b62dc33e",
"text": "Signature-based malicious code detection is the standard technique in all commercial anti-virus software. This method can detect a virus only after the virus has appeared and caused damage. Signature-based detection performs poorly whe n attempting to identify new viruses. Motivated by the standard signature-based technique for detecting viruses, and a recent successful text classification method, n-grams analysis, we explo re the idea of automatically detecting new malicious code. We employ n-grams analysis to automatically generate signatures from malicious and benign software collections. The n-gramsbased signatures are capable of classifying unseen benign and malicious code. The datasets used are large compared to earlier applications of n-grams analysis.",
"title": ""
},
{
"docid": "660d04be7ce665aa48c65e16af0f373f",
"text": "In this paper, we describe the development of a fielded application for detecting malicious executables in the wild. We gathered 1971 benign and 1651 malicious executables and encoded each as a training example using n-grams of byte codes as features. Such processing resulted in more than 255 million distinct n-grams. After selecting the most relevant n-grams for prediction, we evaluated a variety of inductive methods, including naive Bayes, decision trees, support vector machines, and boosting. Ultimately, boosted decision trees outperformed other methods with an area under the roc curve of 0.996. Results also suggest that our methodology will scale to larger collections of executables. To the best of our knowledge, ours is the only fielded application for this task developed using techniques from machine learning and data mining.",
"title": ""
}
] |
[
{
"docid": "7b6bbc2215f5ab8f2932aad3251c041a",
"text": "Given small profit margins, independently owned and operated restaurants are highly sensitive to insider fraud and yet have scant resources to combat the problem. This paper is the first open research to apply machine learning (ML) techniques to detecting insider fraud in point-of-sales transaction data in the restaurant industry. We show that after applying under-sampling techniques and carefully engineering features, ML can deliver very high fraud-detection performance. Knowledge about engineered features, algorithm selection, performance, and tuning gained from this research can be applied in future research on fraud detection of restaurant data.",
"title": ""
},
{
"docid": "cf7b17b690258dc50ec12bfbd9de232d",
"text": "In this paper, we propose a novel method for visual object tracking called HMMTxD. The method fuses observations from complementary out-of-the box trackers and a detector by utilizing a hidden Markov model whose latent states correspond to a binary vector expressing the failure of individual trackers. The Markov model is trained in an unsupervised way, relying on an online learned detector to provide a source of tracker-independent information for a modified BaumWelch algorithm that updates the model w.r.t. the partially annotated data. We show the effectiveness of the proposed method on combination of two and three tracking algorithms. The performance of HMMTxD is evaluated on two standard benchmarks (CVPR2013 and VOT) and on a rich collection of 77 publicly available sequences. The HMMTxD outperforms the state-of-the-art, often significantly, on all datasets in almost all criteria.",
"title": ""
},
{
"docid": "f5317ef27de34a1f63181c6b2508c07c",
"text": "In this paper, we present the design and implementation of FocusVR, a system for effectively and efficiently reducing the power consumption of Virtual Reality (VR) devices by smartly dimming their displays. These devices are becoming increasingly common with large companies such as Facebook (Oculus Rift), and HTC and Valve (Vive), recently releasing high quality VR devices to the consumer market. However, these devices require increasingly higher screen resolutions and refresh rates to be effective, and this in turn, leads to high display power consumption costs. We show how the use of smart dimming techniques, vignettes and color mapping, can significantly reduce the power consumption of VR displays with minimal impact on usability. In particular, we describe the implementation of FocusVR in both Android and the Unity game engine and then present detailed measurement results across 3 different VR devices -- the Gear VR, the DK2, and the Vive. In addition, we present the results of 3 user studies, with 68 participants in total, that tested the usability of FocusVR. Overall, we show that FocusVR is able to save up to 80% of the display power and up to 50% of the overall system power, with negligible impact to usability.",
"title": ""
},
{
"docid": "60ce335cf0a0b349de2bf70b7b1350db",
"text": "Robotic systems are being used for gait rehabilitation of patients with neurological disorders. These are externally powered devices that can apply external forces on human limbs to assist the limb motion. Human walking pattern involves repetitive and well coordinated lower limb movements. A cable driven leg exoskeleton (CDLE) uses actuated cables to apply external torques at anatomical hip and knee joints. However, a cable can apply only pulling force on a body which limits a cable driven system functionality compared to a conventional robotic manipulator. Noting that a CDLE is proposed to assist in complex lower limb motion during walking We present workspace analysis of CDLE considering planar and spatial leg model. Human walking data were used for the analysis and to study the feasibility of CDLE architecture for human gait rehabilitation.1",
"title": ""
},
{
"docid": "c3b07d5c9a88c1f9430615d5e78675b6",
"text": "Two new algorithms and associated neuron-like network architectures are proposed for solving the eigenvalue problem in real-time. The first approach is based on the solution of a set of nonlinear algebraic equations by employing optimization techniques. The second approach employs a multilayer neural network with linear artificial neurons and it exploits the continuous-time error back-propagation learning algorithm. The second approach enables us to find all the eigenvalues and the associated eigenvectors simultaneously by training the network to match some desired patterns, while the first approach is suitable to find during one run only one particular eigenvalue (e.g. an extreme eigenvalue) and the corresponding eigenvector in realtime. In order to find all eigenpairs the optimization process must be repeated in this case many times for different initial conditions. The performance and convergence behaviour of the proposed neural network architectures are investigated by extensive computer simulations.",
"title": ""
},
{
"docid": "7b551cd177701529a3da152332f38f63",
"text": "RATIONAL AND BACKGROUND\nTraditional Thai massage (TTM) is an alternative medicine treatment used for pain relief. The purpose of this paper is to provide a systematic review of the research about the effects of TTM on pain intensity and other important outcomes in individuals with chronic pain.\n\n\nMETHODS\nWe performed a systematic review of the controlled trials of the effects of TTM, using the keywords \"Traditional Thai massage\" or \"Thai massage\" with the keyword \"Chronic pain.\"\n\n\nRESULTS\nSix research articles met the inclusion criteria. All of the studies found a pre- to post-treatment pain reductions, varying from 25% to 80% and was also associated with improvements in disability, perceived muscle tension, flexibility and anxiety.\n\n\nSUMMARY\nThe TTM benefits of pain reduction appear to maintain for up to 15 weeks. Additional research is needed to identify the moderators, mediators and to determine the long-term benefits of TTM relative to control conditions.",
"title": ""
},
{
"docid": "90b2c540f30d263b3861c9218c4a99c8",
"text": "STUDY OBJECTIVE\nThe aim of the study was to compare the incidence of the use of additional uterotonics before and after the change of carbetocin to oxytocin for the prevention of postpartum hemorrhage after cesarean delivery in women with severe preeclampsia.\n\n\nDESIGN\nThis was an observational retrospective before-and-after study.\n\n\nSETTING\nOperating room, postoperative recovery area.\n\n\nPATIENTS\nSixty women with severe preeclampsia undergoing cesarean delivery under spinal anesthesia; American Society of Anesthesiologists 3.\n\n\nINTERVENTIONS\nObservational study.\n\n\nMEASUREMENTS\nBlood pressure, heart rate, and biological data (hemoglobin, platelets, haptoglobin, prothrombin time index, activated partial thromboplastin time ratio, blood uric acid, aspartate aminotransferase, alanine aminotransferase, serum urea, serum creatinine, and albumin).\n\n\nMAIN RESULTS\nThe incidence of additional uterotonic administration in the carbetocin and oxytocin groups was 15% and 10%, respectively (P=.70).\n\n\nCONCLUSIONS\nAs carbetocin appears to be as effective and safe as oxytocin in preeclamptic women, its advantages make it a good uterotonic option in this particular setting.",
"title": ""
},
{
"docid": "048081246f39fc80273d08493c770016",
"text": "Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. There are many skin color detection algorithms that are used to extract human skin color regions that are based on the thresholding technique since it is simple and fast for computation. The efficiency of each color space depends on its robustness to the change in lighting and the ability to distinguish skin color pixels in images that have a complex background. For more accurate skin detection, we are proposing a new threshold based on RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. Then it separates the Y channel, which represents the intensity of the color model from the U and V channels to eliminate the effects of luminance. After that the threshold values are selected based on the testing of the boundary of skin colors with the help of the color histogram. Finally, the threshold was applied to the input image to extract skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure the accuracy and to compare the results of our threshold to the results of other’s thresholds to prove the efficiency of our approach. The results of the experiment show that the proposed threshold is more robust in terms of dealing with the complex background and light conditions than others. Keyword: Skin segmentation; Thresholding technique; Skin detection; Color space",
"title": ""
},
{
"docid": "16880162165f4c95d6b01dc4cfc40543",
"text": "In this paper we present CMUcam3, a low-cost, open source, em bedded computer vision platform. The CMUcam3 is the third generation o f the CMUcam system and is designed to provide a flexible and easy to use ope n source development environment along with a more powerful hardware platfo rm. The goal of the system is to provide simple vision capabilities to small emb dded systems in the form of an intelligent sensor that is supported by an open sou rce community. The hardware platform consists of a color CMOS camera, a frame bu ff r, a low cost 32bit ARM7TDMI microcontroller, and an MMC memory card slot. T he CMUcam3 also includes 4 servo ports, enabling one to create entire, w orking robots using the CMUcam3 board as the only requisite robot processor. Cus tom C code can be developed using an optimized GNU toolchain and executabl es can be flashed onto the board using a serial port without external download ing hardware. The development platform includes a virtual camera target allowi ng for rapid application development exclusively on a PC. The software environment c omes with numerous open source example applications and libraries includi ng JPEG compression, frame differencing, color tracking, convolutions, histog ramming, edge detection, servo control, connected component analysis, FAT file syste m upport, and a face detector.",
"title": ""
},
{
"docid": "08d9b5af2c9d8095bf6a6b3453c89f40",
"text": "Alzheimer's disease (AD) is a neurodegenerative disorder associated with loss of memory and cognitive abilities. Previous evidence suggested that exercise ameliorates learning and memory deficits by increasing brain derived neurotrophic factor (BDNF) and activating downstream pathways in AD animal models. However, upstream pathways related to increase BDNF induced by exercise in AD animal models are not well known. We investigated the effects of moderate treadmill exercise on Aβ-induced learning and memory impairment as well as the upstream pathway responsible for increasing hippocampal BDNF in an animal model of AD. Animals were divided into five groups: Intact, Sham, Aβ1-42, Sham-exercise (Sham-exe) and Aβ1-42-exercise (Aβ-exe). Aβ was microinjected into the CA1 area of the hippocampus and then animals in the exercise groups were subjected to moderate treadmill exercise (for 4 weeks with 5 sessions per week) 7 days after microinjection. In the present study the Morris water maze (MWM) test was used to assess spatial learning and memory. Hippocampal mRNA levels of BDNF, peroxisome proliferator-activated receptor gamma co-activator 1 alpha (PGC-1α), fibronectin type III domain-containing 5 (FNDC5) as well as protein levels of AMPK-activated protein kinase (AMPK), PGC-1α, BDNF, phosphorylation of AMPK were measured. Our results showed that intra-hippocampal injection of Aβ1-42 impaired spatial learning and memory which was accompanied by reduced AMPK activity (p-AMPK/total-AMPK ratio) and suppression of the PGC-1α/FNDC5/BDNF pathway in the hippocampus of rats. In contrast, moderate treadmill exercise ameliorated the Aβ1-42-induced spatial learning and memory deficit, which was accompanied by restored AMPK activity and PGC-1α/FNDC5/BDNF levels. Our results suggest that the increased AMPK activity and up-regulation of the PGC-1α/FNDC5/BDNF pathway by exercise are likely involved in mediating the beneficial effects of exercise on Aβ-induced learning and memory impairment.",
"title": ""
},
{
"docid": "ee7404e6545e12bb111a402c3571465d",
"text": "Natural language understanding and dialog management are two integral components of interactive dialog systems. Previous research has used machine learning techniques to individually optimize these components, with different forms of direct and indirect supervision. We present an approach to integrate the learning of both a dialog strategy using reinforcement learning, and a semantic parser for robust natural language understanding, using only natural dialog interaction for supervision. Experimental results on a simulated task of robot instruction demonstrate that joint learning of both components improves dialog performance over learning either of these components alone.",
"title": ""
},
{
"docid": "a4788b60b0fc16551f03557483a8a532",
"text": "The rapid growth in the population density in urban cities demands tolerable provision of services and infrastructure. To meet the needs of city inhabitants. Thus, increase in the request for embedded devices, such as sensors, actuators, and smartphones, etc., which is providing a great business potential towards the new era of Internet of Things (IoT); in which all the devices are capable of interconnecting and communicating with each other over the Internet. Therefore, the Internet technologies provide a way towards integrating and sharing a common communication medium. Having such knowledge, in this paper, we propose a combined IoT-based system for smart city development and urban planning using Big Data analytics. We proposed a complete system, which consists of various types of sensors deployment including smart home sensors, vehicular networking, weather and water sensors, smart parking sensors, and surveillance objects, etc. A four-tier architecture is proposed which include 1) Bottom Tier-1: which is responsible for IoT sources, data generations, and collections 2) Intermediate Tier-1: That is responsible for all type of communication between sensors, relays, base stations, the internet, etc. 3) Intermediate Tier 2: it is responsible for data management and processing using Hadoop framework, and 4) Top tier: is responsible for application and usage of the data analysis and results generated. The system implementation consists of various steps that start from data generation and collecting, aggregating, filtration, classification, preprocessing, computing and decision making. The proposed system is implemented using Hadoop with Spark, voltDB, Storm or S4 for real time processing of the IoT data to generate results in order to establish the smart city. For urban planning or city future development, the offline historical data is analyzed on Hadoop using MapReduce programming. IoT datasets generated by smart homes, smart parking weather, pollution, and vehicle data sets are used for analysis and evaluation. Such type of system with full functionalities does not exist. Similarly, the results show that the proposed system is more scalable and efficient than the existing systems. Moreover, the system efficiency is measured in term of throughput and processing time.",
"title": ""
},
{
"docid": "39e71a3228331eb8b1574173cfb1e04a",
"text": "Euler Number is one of the most important characteristics in topology. In two-dimension digital images, the Euler characteristic is locally computable. The form of Euler Number formula is different under 4-connected and 8-connected conditions. Based on the definition of the Foreground Segment and Neighbor Number, a formula of the Euler Number computing is proposed and is proved in this paper. It is a new idea to locally compute Euler Number of 2D image.",
"title": ""
},
{
"docid": "385e50da85d4d6b4ec2cdc2ed7309ce8",
"text": "This paper presents a novel reconfigurable framework for training Convolutional Neural Networks (CNNs). The proposed framework is based on reconfiguring a streaming datapath at runtime to cover the training cycle for the various layers in a CNN. The streaming datapath can support various parameterized modules which can be customized to produce implementations with different trade-offs in performance and resource usage. The modules follow the same input and output data layout, simplifying configuration scheduling. For different layers, instances of the modules contain different computation kernels in parallel, which can be customized with different layer configurations and data precision. The associated models on performance, resource and bandwidth can be used in deriving parameters for the datapath to guide the analysis of design trade-offs to meet application requirements or platform constraints. They enable estimation of the implementation specifications given different layer configurations, to maximize performance under the constraints on bandwidth and hardware resources. Experimental results indicate that the proposed module design targeting Maxeler technology can achieve a performance of 62.06 GFLOPS for 32-bit floating-point arithmetic, outperforming existing accelerators. Further evaluation based on training LeNet-5 shows that the proposed framework achieves about 4 times faster than CPU implementation of Caffe and about 7.5 times more energy efficient than the GPU implementation of Caffe.",
"title": ""
},
{
"docid": "a99c8d5b74e2470b30706b57fd96868d",
"text": "Implant restorations have become a primary treatment option for the replacement of congenitally missing lateral incisors. The central incisor and canine often erupt in less than optimal positions adjacent to the edentulous lateral incisor space, and therefore preprosthetic orthodontic treatment is frequently required. Derotation of the central incisor and canine, space closure and correction of root proximities may be required to create appropriate space in which to place the implant and achieve an esthetic restoration. This paper discusses aspects of preprosthetic orthodontic diagnosis and treatment that need to be considered with implant restorations.",
"title": ""
},
{
"docid": "855a8cfdd9d01cd65fe32d18b9be4fdf",
"text": "Interest in business intelligence and analytics education has begun to attract IS scholars’ attention. In order to discover new research questions, there is a need for conducting a literature review of extant studies on BI&A education. This study identified 44 research papers through using Google Scholar related to BI&A education. This research contributes to the field of BI&A education by (a) categorizing the existing studies on BI&A education into the key five research foci, and (b) identifying the research gaps and providing the guide for future BI&A and IS research.",
"title": ""
},
{
"docid": "7e5415fd007bfe74a469c6b6dbfb2419",
"text": "In this thesis, I explore a reinforcement learning technique for improving bounding box localizations of objects in images. The model takes as input a bounding box already known to overlap an object and aims to improve the fit of the box through a series of transformations that shift the location of the box by translation, or change its size or aspect ratio. Over the course of these actions, the model adapts to new information extracted from the image. This active localization approach contrasts with existing bounding-box regression methods, which extract information from the image only once. I implement, train, and test this reinforcement learning model using data taken from the Portland State Dog-Walking image set [12]. The model balances exploration with exploitation in training using an -greedy policy. I find that the performance of the model is sensitive to the -greedy configuration used during training, performing best when the epsilon parameter is set to very low values over the course of training. With = 0.01, I find the algorithm can improve bounding boxes in about 78% of test cases for the ‘dog’ object category, and 76% for the ‘human’ category.",
"title": ""
},
{
"docid": "65f520d865de2ce9cfbed043c0822228",
"text": "Container based virtualization is rapidly growing in popularity for cloud deployments and applications as a virtualization alternative due to the ease of deployment coupled with high-performance. Emerging byte-addressable, nonvolatile memories, commonly called Storage Class Memory or SCM, technologies are promising both byte-addressability and persistence near DRAM speeds operating on the main memory bus. These new memory alternatives open up a new realm of applications that no longer have to rely on slow, block-based persistence, but can rather operate directly on persistent data using ordinary loads and stores through the cache hierarchy coupled with transaction techniques. However, SCM presents a new challenge for container-based applications, which typically access persistent data through layers of block based file isolation. Traditional persistent data accesses in containers are performed through layered file access, which slows byte-addressable persistence and transactional guarantees, or through direct access to drivers, which do not provide for isolation guarantees or security. This paper presents a high-performance containerized version of byte-addressable, non-volatile memory (SCM) for applications running inside a container that solves performance challenges while providing isolation guarantees. We created an open-source container-aware Linux loadable Kernel Module (LKM) called Containerized Storage Class Memory, or CSCM, that presents SCM for application isolation and ease of portability. We performed evaluation using microbenchmarks, STREAMS, and Redis, a popular in-memory data structure store, and found our CSCM driver has near the same memory throughput for SCM applications as a non-containerized application running on a host and much higher throughput than persistent in-memory applications accessing SCM through Docker Storage or Volumes.",
"title": ""
},
{
"docid": "0f87fefbe2cfc9893b6fc490dd3d40b7",
"text": "With the tremendous amount of textual data available in the Internet, techniques for abstractive text summarization become increasingly appreciated. In this paper, we present work in progress that tackles the problem of multilingual text summarization using semantic representations. Our system is based on abstract linguistic structures obtained from an analysis pipeline of disambiguation, syntactic and semantic parsing tools. The resulting structures are stored in a semantic repository, from which a text planning component produces content plans that go through a multilingual generation pipeline that produces texts in English, Spanish, French, or German. In this paper we focus on the lingusitic components of the summarizer, both analysis and generation.",
"title": ""
},
{
"docid": "74b538c7c8f22d9b10822dd303335528",
"text": "Context-aware recommender systems extend traditional recommender systems by adapting their output to users’ specific contextual situations. Most of the existing approaches to context-aware recommendation involve directly incorporating context into standard recommendation algorithms (e.g., collaborative filtering, matrix factorization). In this paper, we highlight the importance of context similarity and make the attempt to incorporate it into context-aware recommender. The underlying assumption behind is that the recommendation lists should be similar if their contextual situations are similar. We integrate context similarity with sparse linear recommendation model to build a similarity-learning model. Our experimental evaluation demonstrates that the proposed model is able to outperform several state-of-the-art context-aware recommendation algorithms for the top-N recommendation task.",
"title": ""
}
] |
scidocsrr
|
ddce468e4241038aaab7d7ef00a1577b
|
Training Skinny Deep Neural Networks with Iterative Hard Thresholding Methods
|
[
{
"docid": "0a14a4d38f1f05aec6e0ea5d658defcf",
"text": "In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy.",
"title": ""
},
{
"docid": "f27cf894faef9a475b011f44fbf57777",
"text": "Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet’s feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model on unaugmented datasets.",
"title": ""
}
] |
[
{
"docid": "75fcc3987407274148485394acf8856b",
"text": "Here we critically review studies that used electroencephalography (EEG) or event-related potential (ERP) indices as a biomarker of Alzheimer's disease. In the first part we overview studies that relied on visual inspection of EEG traces and spectral characteristics of EEG. Second, we survey analysis methods motivated by dynamical systems theory (DST) as well as more recent network connectivity approaches. In the third part we review studies of sleep. Next, we compare the utility of early and late ERP components in dementia research. In the section on mismatch negativity (MMN) studies we summarize their results and limitations and outline the emerging field of computational neurology. In the following we overview the use of EEG in the differential diagnosis of the most common neurocognitive disorders. Finally, we provide a summary of the state of the field and conclude that several promising EEG/ERP indices of synaptic neurotransmission are worth considering as potential biomarkers. Furthermore, we highlight some practical issues and discuss future challenges as well.",
"title": ""
},
{
"docid": "23d8b5456eb169d24b58b76d8af42c82",
"text": "Learning interpretable features from complex multilayer networks is a challenging and important problem. The need for such representations is particularly evident in multilayer networks of the brain, where nodal characteristics may help model and differentiate regions of the brain according to individual, cognitive task, or disease. Motivated by this problem, we introduce the multi-node2vec algorithm, an efficient and scalable feature engineering method that automatically learns continuous node feature representations from multilayer networks. Multi-node2vec relies upon a second-order random walk sampling procedure that efficiently explores the innerand intralayer ties of the observed multilayer network is utilized to identify multilayer neighborhoods. Maximum likelihood estimators of the nodal features are identified through the use of the Skip-gram neural network model on the collection of sampled neighborhoods. We investigate the conditions under which multi-node2vec is an approximation of a closedform matrix factorization problem. We demonstrate the efficacy of multi-node2vec on a multilayer functional brain network from resting state fMRI scans over a group of 74 healthy individuals. We find that multi-node2vec outperforms contemporary methods on complex networks, and that multi-node2vec identifies nodal characteristics that closely associate with the functional organization of the brain.",
"title": ""
},
{
"docid": "2b08148f3d725fa711ae7c7658c17f6a",
"text": "A rapid and selective high-performance liquid chromatographic (HPLC) method is developed for the separation and determination of caffeine, theobromine, and theophylline. The chromatography is performed on a Zorbax Eclipse XDB-C8 column (4.6x150 mm i.d., 5-microm particle size) at 25 degrees C, with a mobile phase of water-THF (0.1% THF in water, pH 8)-acetonitrile (90:10, v/v). The flow rate is 0.8 mL/min, and detection is by UV at 273 nm. This method permits the simultaneous determination of caffeine, theobromine, and theophylline in food, drinks, and herbal products with detection limits of 0.07-0.2 mg/L and recoveries of 100.20-100.42%. Correlation coefficients, for the calibration curves in the linear range of 0.2-100 mg/L, are greater than 0.9999 for all compounds. The within- and between-day precision is determined for both retention times and peak area. The data suggests that the proposed HPLC method can be used for routine quality control of food, drinks, and herbal products.",
"title": ""
},
{
"docid": "44ffc62b4ac312183023c3ed11e0acbb",
"text": "From available literature, J.P Clark-Bekederemo's poetry has not been extensively studied from a linguistic perspective. Previous studies on the poet's work have concentrated on the literary and thematic features of the texts. The present study, therefore, examines mood structures (i.e. a grammatical category that pertains to the clause), in the poetry, in order to determine how language is used to express the manner of speaking of interlocutors, and their roles, judgments and attitudes in specific discourse contexts. Through the aid of Halliday's systemic functional Grammar, particularly the tenor aspect of the interpersonal ‘metafunction’ (other metafunctions being ideational and textual), the study highlights the nature of dialogue (i.e. mood structures) between interactants in the poetry, in relation to social contexts. The discourse-stylistic approach adopted for the study, enables us to examine what is communicated (i.e. discourse) and how it is communicated (i.e. stylistics).",
"title": ""
},
{
"docid": "8bda6d13feb9636028d08c081d0af0b1",
"text": "It is generally challenging to tell apart malware from benign applications. To make this decision, human analysts are frequently interested in runtime values: targets of reflective method calls, URLs to which data is sent, target telephone numbers of SMS messages, and many more. However, obfuscation and string encryption, used by malware as well as goodware, often not only render human inspections, but also static analyses ineffective. In addition, malware frequently tricks dynamic analyses by detecting the execution environment emulated by the analysis tool and then refraining from malicious behavior. In this work we therefore present HARVESTER, an approach to fully automatically extract runtime values from Android applications. HARVESTER is designed to extract values even from highly obfuscated state-of-the-art malware samples that obfuscate method calls using reflection, hide sensitive values in native code, load code dynamically and apply anti-analysis techniques. The approach combines program slicing with code generation and dynamic execution. Experiments on 16,799 current malware samples show that HARVESTER fully automatically extracts many sensitive values, with perfect precision. The process usually takes less than three minutes and does not require human interaction. In particular, it goes without simulating UI inputs. Two case studies further show that by integrating the extracted values back into the app, HARVESTER can increase the recall of existing static and dynamic analysis tools such as FlowDroid and TaintDroid.",
"title": ""
},
{
"docid": "c8a9f16259e437cda8914a92148901ab",
"text": "A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts. Research in natural language dialogue should focus on designing communicative agents which can integrate themselves into these contexts and productively collaborate with humans. In this abstract, we propose a general situated language learning paradigm which is designed to bring about robust language agents able to cooperate productively with humans. This dialogue paradigm is built on a utilitarian definition of language understanding. Language is one of multiple tools which an agent may use to accomplish goals in its environment. We say an agent “understands” language only when it is able to use language productively to accomplish these goals. Under this definition, an agent’s communication success reduces to its success on tasks within its environment. This setup contrasts with many conventional natural language tasks, which maximize linguistic objectives derived from static datasets. Such applications often make the mistake of reifying language as an end in itself. The tasks prioritize an isolated measure of linguistic intelligence (often one of linguistic competence, in the sense of Chomsky (1965)), rather than measuring a model’s effectiveness in real-world scenarios. Our utilitarian definition is motivated by recent successes in reinforcement learning methods. In a reinforcement learning setting, agents maximize success metrics on real-world tasks, without requiring direct supervision of linguistic behavior.",
"title": ""
},
{
"docid": "ec230707da4dc2085863fffb990e5259",
"text": "We propose a novel method for movement assistance that is based on adaptive oscillators, i.e., mathematical tools that are capable of extracting the high-level features (amplitude, frequency, and offset) of a periodic signal. Such an oscillator acts like a filter on these features, but keeps its output in phase with respect to the input signal. Using a simple inverse model, we predicted the torque produced by human participants during rhythmic flexion extension of the elbow. Feeding back a fraction of this estimated torque to the participant through an elbow exoskeleton, we were able to prove the assistance efficiency through a marked decrease of the biceps and triceps electromyography. Importantly, since the oscillator adapted to the movement imposed by the user, the method flexibly allowed us to change the movement pattern and was still efficient during the nonstationary epochs. This method holds promise for the development of new robot-assisted rehabilitation protocols because it does not require prespecifying a reference trajectory and does not require complex signal sensing or single-user calibration: the only signal that is measured is the position of the augmented joint. In this paper, we further demonstrate that this assistance was very intuitive for the participants who adapted almost instantaneously.",
"title": ""
},
{
"docid": "15e7feebdbcafc58aca3abdf9a8c093a",
"text": "Aqueous solutions of lead salts (1, 2) and saturated solutions of lead hydroxide (1) have been used as stains to enhance the electron-scattering properties of components of biological materials examined in the electron microscope. Saturated solutions of lead hydroxide (1), while staining more intensely than either lead acetate or monobasic lead acetate (l , 2), form insoluble lead carbonate upon exposure to air. The avoidance of such precipitates which contaminate surfaces of sections during staining has been the stimulus for the development of elaborate procedures for exclusion of air or carbon dioxide (3, 4). Several modifications of Watson's lead hydroxide stain (1) have recently appeared (5-7). All utilize relatively high pH (approximately 12) and one contains small amounts of tartrate (6), a relatively weak complexing agent (8), in addition to lead. These modified lead stains are less liable to contaminate the surface of the section with precipitated stain products. The stain reported here differs from previous alkaline lead stains in that the chelating agent, citrate, is in sufficient excess to sequester all lead present. Lead citrate, soluble in high concentrations in basic solutions, is a chelate compound with an apparent association constant (log Ka) between ligand and lead ion of 6.5 (9). Tissue binding sites, presumably organophosphates, and other anionic species present in biological components following fixation, dehydration, and plastic embedding apparently have a greater affinity for this cation than lead citrate inasmuch as cellular and extracellular structures in the section sequester lead from the staining solution. Alkaline lead citrate solutions are less likely to contaminate sections, as no precipitates form when droplets of fresh staining solution are exposed to air for periods of up to 30 minutes. The resultant staining of the sections is of high intensity in sections of Aralditeor Epon-embedded material. Cytoplasmic membranes, ribosomes, glycogen, and nuclear material are stained (Figs. 1 to 3). STAIN SOLUTION: Lead citrate is prepared by",
"title": ""
},
{
"docid": "e921e70494cbde3cb8bcb469eca36897",
"text": "The plagioporine opecoelids Helicometra fasciata (Rudolphi, 1819) Odhner, 1902, and Macvicaria crassigula (Linton, 1910) Bartoli, Bray, and Gibson, 1989 have been reported from fishes in expansive geographic regions, disjointed from their type localities. New material of M. crassigula was collected from near its type locality as well as specimens resembling Helicometra fasciata sensu lato from three triglids in the Gulf of Mexico. Comparisons of the ribosomal DNA (rDNA) sequences, comprising the partial 18S rDNA, internal transcribed spacer region (= ITS1, 5.8S, and ITS2), and partial 28S rDNA gene, from M. crassigula and Helicometra fasciata sensu lato in the Gulf of Mexico were made with sequences deposited in GenBank from those species from the Mediterranean Sea. Results reveal that M. crassigula sensu stricto from the Gulf of Mexico is distinct from the two cryptic species of M. crassigula sensu lato from the Mediterranean Sea and Helicometra fasciata sensu lato in this study differs from H. fasciata sequences from the Mediterranean Sea, thus Helicometra manteri sp. nov. is described.",
"title": ""
},
{
"docid": "27b9350b8ea1032e727867d34c87f1c3",
"text": "A field study and an experimental study examined relationships among organizational variables and various responses of victims to perceived wrongdoing. Both studies showed that procedural justice climate moderates the effect of organizational variables on the victim's revenge, forgiveness, reconciliation, or avoidance behaviors. In Study 1, a field study, absolute hierarchical status enhanced forgiveness and reconciliation, but only when perceptions of procedural justice climate were high; relative hierarchical status increased revenge, but only when perceptions of procedural justice climate were low. In Study 2, a laboratory experiment, victims were less likely to endorse vengeance or avoidance depending on the type of wrongdoing, but only when perceptions of procedural justice climate were high.",
"title": ""
},
{
"docid": "1a5ddde73f38ab9b2563540c36c222c0",
"text": "This paper presents a self-adaptive autonomous online learning through a general type-2 fuzzy system (GT2 FS) for the motor imagery (MI) decoding of a brain-machine interface (BMI) and navigation of a bipedal humanoid robot in a real experiment, using electroencephalography (EEG) brain recordings only. GT2 FSs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in real practice: 1) the maximum number of EEG channels is limited and fixed; 2) no possibility of performing repeated user training sessions; and 3) desirable use of unsupervised and low-complexity feature extraction methods. The novel online learning method presented in this paper consists of a self-adaptive GT2 FS that can autonomously self-adapt both its parameters and structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath–Geva algorithm where every MI decoding class can be represented by multiple fuzzy rules (models), which are learnt in a continous (trial-by-trial) non-iterative basis. The effectiveness of the proposed method is demonstrated in a detailed BMI experiment, in which 15 untrained users were able to accurately interface with a humanoid robot, in a single session, using signals from six EEG electrodes only.",
"title": ""
},
{
"docid": "8405f30ca5f4bd671b056e9ca1f4d8df",
"text": "The remarkable manipulative skill of the human hand is not the result of rapid sensorimotor processes, nor of fast or powerful effector mechanisms. Rather, the secret lies in the way manual tasks are organized and controlled by the nervous system. At the heart of this organization is prediction. Successful manipulation requires the ability both to predict the motor commands required to grasp, lift, and move objects and to predict the sensory events that arise as a consequence of these commands.",
"title": ""
},
{
"docid": "aa2ddbfc3bb1aa854d1c576927dc2d30",
"text": "B-scan ultrasound provides a non-invasive low-cost imaging solution to primary care diagnostics. The inherent speckle noise in the images produced by this technique introduces uncertainty in the representation of their textural characteristics. To cope with the uncertainty, we propose a novel fuzzy feature extraction method to encode local texture. The proposed method extends the Local Binary Pattern (LBP) approach by incorporating fuzzy logic in the representation of local patterns of texture in ultrasound images. Fuzzification allows a Fuzzy Local Binary Pattern (FLBP) to contribute to more than a single bin in the distribution of the LBP values used as a feature vector. The proposed FLBP approach was experimentally evaluated for supervised classification of nodular and normal samples from thyroid ultrasound images. The results validate its effectiveness over LBP and other common feature extraction methods.",
"title": ""
},
{
"docid": "6a2d7b29a0549e99cdd31dbd2a66fc0a",
"text": "We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.",
"title": ""
},
{
"docid": "fecacef7460517ddb4f1d8dc66a089ea",
"text": "Recognizing materials in real-world images is a challenging task. Real-world materials have rich surface texture, geometry, lighting conditions, and clutter, which combine to make the problem particularly difficult. In this paper, we introduce a new, large-scale, open dataset of materials in the wild, the Materials in Context Database (MINC), and combine this dataset with deep learning to achieve material recognition and segmentation of images in the wild. MINC is an order of magnitude larger than previous material databases, while being more diverse and well-sampled across its 23 categories. Using MINC, we train convolutional neural networks (CNNs) for two tasks: classifying materials from patches, and simultaneous material recognition and segmentation in full images. For patch-based classification on MINC we found that the best performing CNN architectures can achieve 85.2% mean class accuracy. We convert these trained CNN classifiers into an efficient fully convolutional framework combined with a fully connected conditional random field (CRF) to predict the material at every pixel in an image, achieving 73.1% mean class accuracy. Our experiments demonstrate that having a large, well-sampled dataset such as MINC is crucial for real-world material recognition and segmentation.",
"title": ""
},
{
"docid": "f56c5a623b29b88f42bf5d6913b2823e",
"text": "We describe a novel interface for composition of polygonal meshes based around two artist-oriented tools: Geometry Drag-and-Drop and Mesh Clone Brush. Our drag-and-drop interface allows a complex surface part to be selected and interactively dragged to a new location. We automatically fill the hole left behind and smoothly deform the part to conform to the target surface. The artist may increase the boundary rigidity of this deformation, in which case a fair transition surface is automatically computed. Our clone brush allows for transfer of surface details with precise spatial control. These tools support an interaction style that has not previously been demonstrated for 3D surfaces, allowing detailed 3D models to be quickly assembled from arbitrary input meshes. We evaluated this interface by distributing a basic tool to computer graphics hobbyists and professionals, and based on their feedback, describe potential workflows which could utilize our techniques.",
"title": ""
},
{
"docid": "81fc9abd3e2ad86feff7bd713cff5915",
"text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.",
"title": ""
},
{
"docid": "62d21ddba64df488fc82e9558f2afc99",
"text": "The spatial analysis of crime and the current focus on hotspots has pushed the area of crime mapping to the fore, especially in regard to high volume offences such as vehicle theft and burglary. Hotspots also have a temporal component, yet police recorded crime databases rarely record the actual time of offence as this is seldom known. Police crime data tends, more often than not, to reflect the routine activities of the victims rather than the offence patterns of the offenders. This paper demonstrates a technique that uses police START and END crime times to generate a crime occurrence probability at any given time that can be mapped or visualized graphically. A study in the eastern suburbs of Sydney, Australia, demonstrates that crime hotspots with a geographical proximity can have distinctly different temporal patterns.",
"title": ""
},
{
"docid": "8a072bb125569fa1a52c1e86dacc0500",
"text": "Accurate prediction of lake-level variations is important for planning, design, construction, and operation of lakeshore structures and also in the management of freshwater lakes for water supply purposes. In the present paper, three artificial intelligence approaches, namely artificial neural networks (ANNs), adaptive-neuro-fuzzy inference system (ANFIS), and gene expression programming (GEP), were applied to forecast daily lake-level variations up to 3-day ahead time intervals. The measurements at the Lake Iznik in Western Turkey, for the period of January 1961–December 1982, were used for training, testing, and validating the employed models. The results obtained by the GEP approach indicated that it performs better than ANFIS and ANNs in predicting lake-level variations. A comparison was also made between these artificial intelligence approaches and convenient autoregressive moving average (ARMA) models, which demonstrated the superiority of GEP, ANFIS, and ANN models over ARMA models. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "741e0f73b414b5eef1ce44bbfdb33646",
"text": "Organizing Web services into functionally similar clusters, is an efficient approach to discovering Web services efficiently. An important aspect of the clustering process is calculating the semantic similarity of Web services. Most current clustering approaches are based on similarity-distance measurement, including keyword, ontology and information-retrieval-based methods. Problems with these approaches include a shortage of high quality ontologies and a loss of semantic information. In addition, there has been little fine-grained improvement in existing approaches to service clustering. In this paper, we present a new approach to grouping Web services into functionally similar clusters by mining Web service documents and generating an ontology via hidden semantic patterns present within the complex terms used in service features to measure similarity. If calculating the similarity using the generated ontology fails, the similarity is calculated by using an information-retrieval-based term-similarity method that adopts term-similarity measuring techniques used by thesaurus and search engines. Another important aspect of high performance in clustering is identifying the most suitable cluster center. To improve the utility of clusters, we propose an approach to identifying the cluster center that combines service similarity with the term frequency-inverse document frequency values of service names. Experimental results show that our clustering approach performs better than existing approaches.",
"title": ""
}
] |
scidocsrr
|
ae0e30466ab3c718e5e682eeffa6a9da
|
Practical memory checking with Dr. Memory
|
[
{
"docid": "a9d22e2568bcae7a98af7811546c7853",
"text": "This thesis addresses the challenges of building a software system for general-purpose runtime code manipulation. Modern applications, with dynamically-loaded modules and dynamicallygenerated code, are assembled at runtime. While it was once feasible at compile time to observe and manipulate every instruction — which is critical for program analysis, instrumentation, trace gathering, optimization, and similar tools — it can now only be done at runtime. Existing runtime tools are successful at inserting instrumentation calls, but no general framework has been developed for fine-grained and comprehensive code observation and modification without high overheads. This thesis demonstrates the feasibility of building such a system in software. We present DynamoRIO, a fully-implemented runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO uses code caching technology to provide efficient, transparent, and comprehensive manipulation of an unmodified application running on a stock operating system and commodity hardware. DynamoRIO executes large, complex, modern applications with dynamically-loaded, generated, or even modified code. Despite the formidable obstacles inherent in the IA-32 architecture, DynamoRIO provides these capabilities efficiently, with zero to thirty percent time and memory overhead on both Windows and Linux. DynamoRIO exports an interface for building custom runtime code manipulation tools of all types. It has been used by many researchers, with several hundred downloads of our public release, and is being commercialized in a product for protection against remote security exploits, one of numerous applications of runtime code manipulation. Thesis Supervisor: Saman Amarasinghe Title: Associate Professor of Electrical Engineering and Computer Science",
"title": ""
}
] |
[
{
"docid": "74a38306c18b0a0ec6e02e5446ff7ed1",
"text": "In this work we scrutinize a low level computer vision task - non-maximum suppression (NMS) - which is a crucial preprocessing step in many computer vision applications. Especially in real time scenarios, efficient algorithms for such preprocessing algorithms, which operate on the full image resolution, are important. In the case of NMS, it seems that merely the straightforward implementation or slight improvements are known. We show that these are far from being optimal, and derive several algorithms ranging from easy-to-implement to highly-efficient",
"title": ""
},
{
"docid": "476d6ba19c68e10cad80874b8f0e99a2",
"text": "Synthetic aperture radar (SAR) is a general method for generating high-resolution radar maps from low-resolution aperture data which is based on using the relative motion between the radar antenna and the imaged scene. Originally conceived in the early 1950s [1], it is extensively used to image objects on the surface of the Earth and the planets [2]. A synthetic aperture is formed using electromagnetic signals from a physical aperture located at different space-time positions. The synthetic aperture may therefore observe the scene over a large angular sector by moving the physical aperture. Hence, the technique can give a significant improvement in resolution, in principle limited only by the stability of the wave field and other restrictions imposed on the movement of the physical aperture. A physical aperture, on the other hand, provides angular resolution inversely proportional to aperture size such that the spatial resolution degrades with increasing distance to the scene. SAR images of the ground are often generated from pulse echo data acquired by an antenna moving along a nominally linear track. It is well known that the spatial resolution can be made independent of distance to the ground since the antenna can be moved along correspondingly longer tracks [2]. It is therefore possible to produce radar maps with meteror decimeter-resolution from aircraft or spacecraft at very large distances. The resolution in these systems are limited by antenna illumination and system bandwidth but also by other factors, e.g. accuracy of antenna positioning, propagation perturbations, transmitter power, receiver sensitivity, clock stability, and dynamic range. The ultimate limit of SAR spatial resolution is proportional to the wavelength. The finest resolution is determined by the classical uncertainty principle applied to a band-limited wave packet. The area of a resolution cell can be shown to be related to radar system bandwidth B (= fmax fmin, where fmax and fmin are the maximum and minimum electromagnetic frequency, respectively) and aperture angle #2 #1 (the angle over which the antenna is moved and radiating as seen from the imaged ground) according to [3] ¢ASAR = ̧c 2(#2 #1) c",
"title": ""
},
{
"docid": "79aa4b2c2215a677b92429d6c90410d0",
"text": "Intruders computers, who are spread across the Internet have become a major threat in our world, The researchers proposed a number of techniques such as (firewall, encryption) to prevent such penetration and protect the infrastructure of computers, but with this, the intruders managed to penetrate the computers. IDS has taken much of the attention of researchers, IDS monitors the resources computer and sends reports on the activities of any anomaly or strange patterns The aim of this paper is to explain the stages of the evolution of the idea of IDS and its importance to researchers and research centres, security, military and to examine the importance of intrusion detection systems and categories , classifications, and where can put IDS to reduce the risk to the network.",
"title": ""
},
{
"docid": "10a3bb2de2abc34c07a975bf6da5e266",
"text": "Main-tie-main (MTM) transfer schemes increase reliability in a power system by switching a load bus to a secondary power source when a power interruption occurs on the primary source. Traditionally, the large number of physical I/O lines required makes main-tie-main schemes expensive to design and implement. Using Ethernet-based IEC 61850, these hardwired I/O lines can be removed and replaced with generic object-oriented substation event (GOOSE) messages. Adjusting the scheme for optimal performance is done via software which saves redesign time and rewiring time. Special attention is paid to change-of-state GOOSE only; no analog GOOSE messages are used, making the scheme fast and easy to configure, maintain, and troubleshoot. Applications such as fast motor-bus transfer are discussed with synchronization remaining at each breaker relay. Simulation test results recorded GOOSE message latencies on a system configured for a main-tie-main scheme. This paper presents details of open and closed, manual and automatic transfers.",
"title": ""
},
{
"docid": "38a0c9b833bd907065b549cc28d28dd4",
"text": "Increased adoption of mobile devices introduces a new spin to Internet: mobile apps are becoming a key source of user traffic. Surprisingly, service providers and enterprises are largely unprepared for this change as they increasingly lose understanding of their traffic and fail to persistently identify individual apps. App traffic simply appears no different than any other HTTP data exchange. This raises a number of concerns for security and network management. In this paper, we propose AppPrint, a system that learns fingerprints of mobile apps via comprehensive traffic observations. We show that these fingerprints identify apps even in small traffic samples where app identity cannot be explicitly revealed in any individual traffic flows. This unique AppPrint feature is crucial because explicit app identifiers are extremely scarce, leading to a very limited characterization coverage of the existing approaches. In fact, our experiments on a nationwide dataset from a major cellular provider show that AppPrint significantly outperforms any existing app identification. Moreover, the proposed system is robust to the lack of key app-identification sources, i.e., the traffic related to ads and analytic services commonly leveraged by the state-of-the-art identification methods.",
"title": ""
},
{
"docid": "4373b838d10ac77127c3a7021fe4534c",
"text": "Fine-grained recognition concerns categorization at sub-ordinate levels, where the distinction between object classes is highly local. Compared to basic level recognition, fine-grained categorization can be more challenging as there are in general less data and fewer discriminative features. This necessitates the use of stronger prior for feature selection. In this work, we include humans in the loop to help computers select discriminative features. We introduce a novel online game called \"Bubbles\" that reveals discriminative features humans use. The player's goal is to identify the category of a heavily blurred image. During the game, the player can choose to reveal full details of circular regions (\"bubbles\"), with a certain penalty. With proper setup the game generates discriminative bubbles with assured quality. We next propose the \"Bubble Bank\" algorithm that uses the human selected bubbles to improve machine recognition performance. Experiments demonstrate that our approach yields large improvements over the previous state of the art on challenging benchmarks.",
"title": ""
},
{
"docid": "e696739abd79af4281cac1c3a192c268",
"text": "In this paper, we consider principles of design and basic fundamentals of a GPU simulator of the multilayer neural network with multi-valued neurons (MLMVN). Slowing down a learning process due to a big learning dataset and/or a big neural network needed for solving a certain problem is a potential bottleneck preventing the use of neural networks for solving some challenging problems. The same is related to deep learning. MLMVN is a feedforward complex-valued neural network, which has a number of advantages when compared to real-valued neural networks. These advantages include derivative-free learning and significantly better generalization capability. To extend applicability of MLMVN, its GPU-based software implementation shall be considered. We present basic principles of the GPU simulator of MLMVN and how matrix algebra operations are specifically employed there. It is shown that the bigger the network is, the more beneficial is its GPU implementation. It is shown that up to 32× acceleration can be achieved for the MLMVN learning process. Some applications, which could not be even considered without a GPU simulator, are also presented.",
"title": ""
},
{
"docid": "aa2df951eba502ec71eda401755d25a7",
"text": "A uni-planar backward directional couplers is analyzed and designed. Microstrip parallel coupled lines and asymmetrical delay lines make multi-sectioned coupler which enhances the directivity. In this paper, 20 dB multi-sectioned couplers with single and double delay lines are designed and measured to show the validation of analysis. The coupler with two delay lines has the directivity over 30 dB and the fractional bandwidth of 30% at the center frequency of 1.8 GHz.",
"title": ""
},
{
"docid": "c43f26b8f58bb93b6dbb1034a77163ec",
"text": "Protecting software copyright has been an issue since the late 1970’s, and software license validation has been a primary method employed in an attempt to minimise software piracy and protect software copyright. This paper presents a novel method for decentralised peer-topeer software license validation using cryptocurrency blockchain technology to ameliorate software piracy, and to provide a mechanism for all software developers to protect their copyrighted works.",
"title": ""
},
{
"docid": "d6564e6ab6b770792f7563377478fb18",
"text": "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.",
"title": ""
},
{
"docid": "5c9ba6384b6983a26212e8161e502484",
"text": "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples – ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.",
"title": ""
},
{
"docid": "597a3b52fd5114228d74398756d3359f",
"text": "The authors report a meta-analysis of individual differences in detecting deception, confining attention to occasions when people judge strangers' veracity in real-time with no special aids. The authors have developed a statistical technique to correct nominal individual differences for differences introduced by random measurement error. Although researchers have suggested that people differ in the ability to detect lies, psychometric analyses of 247 samples reveal that these ability differences are minute. In terms of the percentage of lies detected, measurement-corrected standard deviations in judge ability are less than 1%. In accuracy, judges range no more widely than would be expected by chance, and the best judges are no more accurate than a stochastic mechanism would produce. When judging deception, people differ less in ability than in the inclination to regard others' statements as truthful. People also differ from one another as lie- and truth-tellers. They vary in the detectability of their lies. Moreover, some people are more credible than others whether lying or truth-telling. Results reveal that the outcome of a deception judgment depends more on the liar's credibility than any other individual difference.",
"title": ""
},
{
"docid": "b20aa52ea2e49624730f6481a99a8af8",
"text": "A 51.3-MHz 18-<inline-formula><tex-math notation=\"LaTeX\">$\\mu\\text{W}$</tex-math></inline-formula> 21.8-ppm/°C relaxation oscillator is presented in 90-nm CMOS. The proposed oscillator employs an integrated error feedback and composite resistors to minimize its sensitivity to temperature variations. For a temperature range from −20 °C to 100 °C, the fabricated circuit demonstrates a frequency variation less than ±0.13%, leading to an average frequency drift of 21.8 ppm/°C. As the supply voltage changes from 0.8 to 1.2 V, the frequency variation is ±0.53%. The measured rms jitter and phase noise at 1-MHz offset are 89.27 ps and −83.29 dBc/Hz, respectively.",
"title": ""
},
{
"docid": "279d9681a72f345bbf9ec852e3cb52f1",
"text": "OBJECTIVE\nOne of the most important considerations in the care of thoracic surgery patients is the control of pain, which leads to increased morbidity and relevant mortality.\n\n\nMETHODS\nBetween February and May 2009, 60 patients undergoing full muscle-sparing posterior minithoracotomy were prospectively randomized into two groups, according to the thoracotomy closure techniques. In the first group (group A), two holes were drilled into the sixth rib using a hand perforator, and sutures were passed through the holes in the sixth rib and were circled from the upper edge of the fifth rib, thereby compressing the intercostal nerve underneath the fifth rib. In the second group (group B), the intercostal muscle underneath the fifth rib was partially dissected along with the intercostal nerve, corresponding to the holes on the sixth rib. Two 1/0 polyglactin (Vicyrl) sutures were passed through the holes in the sixth rib and above the intercostal nerve.\n\n\nRESULTS\nThere were 30 patients in each group. The visual analog score, observer verbal ranking scale (OVRS) scores for pain, and Ramsay sedation scores were used to follow-up on postoperative analgesia and sedation. The von Frey hair test was used to evaluate hyperalgesia of the patients. The patients in group B had lower visual analog scores at rest and during coughing. The patients in group B had lower OVRS scores than group A patients. The groups were not statistically different in terms of the Ramsay sedation scores and von Frey hair tests.\n\n\nCONCLUSIONS\nThoracotomy closure by a technique that avoids intercostal nerve compression significantly decreases post-thoracotomy pain.",
"title": ""
},
{
"docid": "068be5b13515937ed76592bf8a9782ce",
"text": "We outline the core components of a modulation recognition system that uses hierarchical deep neural networks to identify data type, modulation class and modulation order. Our system utilizes a flexible front-end detector that performs energy detection, channelization and multi-band reconstruction on wideband data to provide raw narrowband signal snapshots. We automatically extract features from these snapshots using convolutional neural network layers, which produce decision class estimates. Initial experimentation on a small synthetic radio frequency dataset indicates the viability of deep neural networks applied to the communications domain. We plan to demonstrate this system at the Battle of the Mod Recs Workshop at IEEE DySpan 2017.",
"title": ""
},
{
"docid": "080e7880623a09494652fd578802c156",
"text": "Whole-cell biosensors are a good alternative to enzyme-based biosensors since they offer the benefits of low cost and improved stability. In recent years, live cells have been employed as biosensors for a wide range of targets. In this review, we will focus on the use of microorganisms that are genetically modified with the desirable outputs in order to improve the biosensor performance. Different methodologies based on genetic/protein engineering and synthetic biology to construct microorganisms with the required signal outputs, sensitivity, and selectivity will be discussed.",
"title": ""
},
{
"docid": "59754857209f45ab7c3708fa413808a3",
"text": "Recent studies on the hippocampus and the prefrontal cortex have considerably advanced our understanding of the distinct roles of these brain areas in the encoding and retrieval of memories, and of how they interact in the prolonged process by which new memories are consolidated into our permanent storehouse of knowledge. These studies have led to a new model of how the hippocampus forms and replays memories and how the prefrontal cortex engages representations of the meaningful contexts in which related memories occur, as well as how these areas interact during memory retrieval. Furthermore, they have provided new insights into how interactions between the hippocampus and prefrontal cortex support the assimilation of new memories into pre-existing networks of knowledge, called schemas, and how schemas are modified in this process as the foundation of memory consolidation.",
"title": ""
},
{
"docid": "68bd70bc546e983f5fa71e17bdde3e00",
"text": "Hand–eye calibration is a classic problem in robotics that aims to find the transformation between two rigidly attached reference frames, usually a camera and a robot end-effector or a motion tracker. Most hand–eye calibration techniques require two data streams, one containing the eye (camera) motion and the other containing the hand (robot/tracker) motion, and the classic hand–eye formulation assumes that both data streams are fully synchronized. However, different motion capturing devices and cameras often have variable capture rates and timestamps that cannot always be easily triggered in sync. Although probabilistic approaches have been proposed to solve for nonsynchronized data streams, they are not able to deal with different capture rates. We propose a new approach for unsynchronized hand–eye calibration that is able to deal with different capture rates and time delays. Our method interpolates and resamples the signal with the lowest capture rate in a way that is consistent with the twist motion constraints of the hand–eye problem. Cross-correlation techniques are then used to produce two fully synchronized data streams that can be used to solve the hand–eye problem with classic methods. In our experimental section, we show promising validation results on simulation data and also on real data obtained from a robotic arm holding a camera.",
"title": ""
},
{
"docid": "653bdddafdb40af00d5d838b1a395351",
"text": "Advances in electronic location technology and the coming of age of mobile computing have opened the door for location-aware applications to permeate all aspects of everyday life. Location is at the core of a large number of high-value applications ranging from the life-and-death context of emergency response to serendipitous social meet-ups. For example, the market for GPS products and services alone is expected to grow to US$200 billion by 2015. Unfortunately, there is no single location technology that is good for every situation and exhibits high accuracy, low cost, and universal coverage. In fact, high accuracy and good coverage seldom coexist, and when they do, it comes at an extreme cost. Instead, the modern localization landscape is a kaleidoscope of location systems based on a multitude of different technologies including satellite, mobile telephony, 802.11, ultrasound, and infrared among others. This lecture introduces researchers and developers to the most popular technologies and systems for location estimation and the challenges and opportunities that accompany their use. For each technology, we discuss the history of its development, the various systems that are based on it, and their trade-offs and their effects on cost and performance. We also describe technology-independent algorithms that are commonly used to smooth streams of location estimates and improve the accuracy of object tracking. Finally, we provide an overview of the wide variety of application domains where location plays a key role, and discuss opportunities and new technologies on the horizon. KEyWoRDS localization, location systems, location tracking, context awareness, navigation, location sensing, tracking, Global Positioning System, GPS, infrared location, ultrasonic location, 802.11 location, cellular location, Bayesian filters, RFID, RSSI, triangulation",
"title": ""
},
{
"docid": "cf5e96ba465e9452973223181e45e9fe",
"text": "Neural Machine Translation (NMT) typically leverages monolingual data in training through backtranslation. We investigate an alternative simple method to use monolingual data for NMT training: We combine the scores of a pre-trained and fixed language model (LM) with the scores of a translation model (TM) while the TM is trained from scratch. To achieve that, we train the translation model to predict the residual probability of the training data added to the prediction of the LM. This enables the TM to focus its capacity on modeling the source sentence since it can rely on the LM for fluency. We show that our method outperforms previous approaches to integrate LMs into NMT while the architecture is simpler as it does not require gating networks to balance TM and LM. We observe gains of between +0.24 and +2.36 BLEU on all four test sets (English-Turkish, TurkishEnglish, Estonian-English, Xhosa-English) on top of ensembles without LM. We compare our method with alternative ways to utilize monolingual data such as backtranslation, shallow fusion, and cold fusion.",
"title": ""
}
] |
scidocsrr
|
95655a31cbbdd856d63633f08e4f8d7f
|
Implementation of Enhanced Web Crawler for Deep-Web Interfaces
|
[
{
"docid": "0ae0e78ac068d8bc27d575d90293c27b",
"text": "Deep web refers to the hidden part of the Web that remains unavailable for standard Web crawlers. To obtain content of Deep Web is challenging and has been acknowledged as a significant gap in the coverage of search engines. To this end, the paper proposes a novel deep web crawling framework based on reinforcement learning, in which the crawler is regarded as an agent and deep web database as the environment. The agent perceives its current state and selects an action (query) to submit to the environment according to Q-value. The framework not only enables crawlers to learn a promising crawling strategy from its own experience, but also allows for utilizing diverse features of query keywords. Experimental results show that the method outperforms the state of art methods in terms of crawling capability and breaks through the assumption of full-text search implied by existing methods.",
"title": ""
},
{
"docid": "d578c75d20e6747d0a381aee3a2c8f78",
"text": "As deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of deep web, achieving wide coverage and high efficiency is a challenging issue. We propose a two-stage framework, namely SmartCrawler, for efficient harvesting deep web interfaces. In the first stage, SmartCrawler performs site-based searching for center pages with the help of search engines, avoiding visiting a large number of pages. To achieve more accurate results for a focused crawl, SmartCrawler ranks websites to prioritize highly relevant ones for a given topic. In the second stage, SmartCrawler achieves fast in-site searching by excavating most relevant links with an adaptive link-ranking. To eliminate bias on visiting some highly relevant links in hidden web directories, we design a link tree data structure to achieve wider coverage for a website. Our experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framework, which efficiently retrieves deep-web interfaces from large-scale sites and achieves higher harvest rates than other crawlers.",
"title": ""
}
] |
[
{
"docid": "3180f7bd813bcd64065780bc9448dc12",
"text": "This paper reports on email classification and filtering, more specifically on spam versus ham and phishing versus spam classification, based on content features. We test the validity of several novel statistical feature extraction methods. The methods rely on dimensionality reduction in order to retain the most informative and discriminative features. We successfully test our methods under two schemas. The first one is a classic classification scenario using a 10-fold cross-validation technique for several corpora, including four ground truth standard corpora: Ling-Spam, SpamAssassin, PU1, and a subset of the TREC 2007 spam corpus, and one proprietary corpus. In the second schema, we test the anticipatory properties of our extracted features and classification models with two proprietary datasets, formed by phishing and spam emails sorted by date, and with the public TREC 2007 spam corpus. The contributions of our work are an exhaustive comparison of several feature selection and extraction methods in the frame of email classification on different benchmarking corpora, and the evidence that especially the technique of biased discriminant analysis offers better discriminative features for the classification, gives stable classification results notwithstanding the amount of features chosen, and robustly retains their discriminative value over time and data setups. These findings are especially useful in a commercial setting, where short profile rules are built based on a limited number of features for filtering emails.",
"title": ""
},
{
"docid": "b42bc73311ca98129bfb4e436d3f8553",
"text": "Diabetic retinopathy has been an important cause of blindness in young and middle age adults in the United States. Epidemiologic studies have quantitated the risk and have described potentially causal factors associated with many ocular complications of diabetes and other facets of this disease. A review of recent advances in diagnosis, treatment, temporal trends, and health care for diabetic retinopathy was conducted. Since the early 1980's, there have been studies of the variability of diabetic retinopathy in populations around the world and subpopulations in the United States which have demonstrated the high prevalences and incidences of this condition. Observational studies and clinical trials have documented the importance of glycemic and blood pressure control in the development and progression of this disease. There are some differences in the importance of confounders in different populations. Epidemiologic data have helped understand the importance of health care and health education in prevention and treatment of this condition. Observational studies have documented the importance of this disease on quality of life. Although there have been advances in understanding the distribution, causes, and severity of diabetic retinopathy, this is ever changing and requires continued monitoring. This is important because the increasing burden of diabetes will place a greater burden on the population and the medical care systems that will be caring for them.",
"title": ""
},
{
"docid": "e1fb515f0f5bbec346098f1ee2aaefdc",
"text": "Observing failures and other – desired or undesired – behavior patterns in large scale software systems of specific domains (telecommunication systems, information systems, online web applications, etc.) is difficult. Very often, it is only possible by examining the runtime behavior of these systems through operational logs or traces. However, these systems can generate data in order of gigabytes every day, which makes a challenge to process in the course of predicting upcoming critical problems or identifying relevant behavior patterns. We can say that there is a gap between the amount of information we have and the amount of information we need to make a decision. Low level data has to be processed, correlated and synthesized in order to create high level, decision helping data. The actual value of this high level data lays in its availability at the time of decision making (e.g., do we face a virus attack?). In other words high level data has to be available real-time or near real-time. The research area of event processing deals with processing such data that are viewed as events and with making alerts to the administrators (users) of the systems about relevant behavior patterns based on the rules that are determined in advance. The rules or patterns describe the typical circumstances of the events which have been experienced by the administrators. Normally, these experts improve their observation capabilities over time as they experience more and more critical events and the circumstances preceding them. However, there is a way to aid this manual process by applying the results from a related (and from many aspects, overlapping) research area, predictive analytics, and thus improving the effectiveness of event processing. Predictive analytics deals with the prediction of future events based on previously observed historical data by applying sophisticated methods like machine learning, the historical data is often collected and transformed by using techniques similar to the ones of event processing, e.g., filtering, correlating the data, and so on. In this paper, we are going to examine both research areas and offer a survey on terminology, research achievements, existing solutions, and open issues. We discuss the applicability of the research areas to the telecommunication domain. We primarily base our survey on articles published in international conferences and journals, but we consider other sources of information as well, like technical reports, tools or web-logs.",
"title": ""
},
{
"docid": "560993e2d417baaba96a09cc0bf04515",
"text": "Convolutional Neural Networks (CNNs) are extremely efficient, since they exploit the inherent translation-invariance of natural images. However, translation is just one of a myriad of useful spatial transformations. Can the same efficiency be attained when considering other spatial invariances? Such generalized convolutions have been considered in the past, but at a high computational cost. We present a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy. It consists of a constant image warp followed by a simple convolution, which are standard blocks in deep learning toolboxes. With a carefully crafted warp, the resulting architecture can be made invariant to one of a wide range of spatial transformations. We show encouraging results in realistic scenarios, including the estimation of vehicle poses in the Google Earth dataset (rotation and scale), and face poses in Annotated Facial Landmarks in the Wild (3D rotations under perspective).",
"title": ""
},
{
"docid": "88839cba2b2c91f7e18d84c9d4ecd976",
"text": "Keyphrase boundary classification (KBC) is the task of detecting keyphrases in scientific articles and labelling them with respect to predefined types. Although important in practice, this task is so far underexplored, partly due to the lack of labelled data. To overcome this, we explore several auxiliary tasks, including semantic super-sense tagging and identification of multi-word expressions, and cast the task as a multi-task learning problem with deep recurrent neural networks. Our multi-task models perform significantly better than previous state of the art approaches on two scientific KBC datasets, particularly for long keyphrases.",
"title": ""
},
{
"docid": "e6d8e8d04585c60a55ebb8229f06e996",
"text": "Cellphones provide a unique opportunity to examine how new media both reflect and affect the social world. This study suggests that people map their understanding of common social rules and dilemmas onto new technologies. Over time, these interactions create and reflect a new social landscape. Based upon a year-long observational field study and in-depth interviews, this article examines cellphone usage from two main perspectives: how social norms of interaction in public spaces change and remain the same; and how cellphones become markers for social relations and reflect tacit pre-existing power relations. Informed by Goffman’s concept of cross talk and Hopper’s caller hegemony, the article analyzes the modifications, innovations and violations of cellphone usage on tacit codes of social interactions.",
"title": ""
},
{
"docid": "8b7aab188ac4b6e4e777dfd1c670fab3",
"text": "In this paper, we have designed a newly shaped narrowband microstrip antenna operating at nearly 2.45 GHz based on transmission-line model. We have created a reversed `Arrow' shaped slot at the edge of opposite side of microstrip line feed to improve return loss and minimize VSWR, which are required for better impedance matching. After simulating the design, we have got higher return loss (approximately -41 dB) and lower VSWR (approximately 1.02:1) at 2.442 GHz. The radiation pattern of the antenna is unidirectional, which is suitable for both fixed RFID tag and reader. The gain of this antenna is 9.67 dB. The design has been simulated in CST Microwave Studio 2011.",
"title": ""
},
{
"docid": "0b631a4139efb14c1fe43876b29cf1c6",
"text": "In recent years, remote sensing image data have increased significantly due to the improvement of remote sensing technique. On the other hand, data acquisition rate will also be accelerated by increasing satellite sensors. Hence, it is a large challenge to make full use of so considerable data by conventional retrieval approach. The lack of semantic based retrieval capability has impeded application of remote sensing data. To address the issue, we propose a framework based on domain-dependent ontology to perform semantic retrieval in image archives. Firstly, primitive features expressed by color and texture are extracted to gain homogeneous region by means of our unsupervised algorithm. The homogeneous regions are described by high-level concepts depicted and organized by domain specific ontology. Interactive learning technique is employed to associate regions and high-level concepts. These associations are used to perform querying task. Additionally, a reasoning mechanism over ontology integrating an inference engine is discussed. It enables the capability of semantic query in archives by mining the interrelationships among domain concepts and their properties to satisfy users’ requirements. In our framework, ontology is used to provide a sharable and reusable concept set as infrastructure for high level extension such as reasoning. Finally, preliminary results are present and future work is also discussed. KeywordsImage retrieval; Ontology; Semantic reasoning;",
"title": ""
},
{
"docid": "a50b7ab02d2fe934f5fb5bed14fcdad9",
"text": "An empirical study has been conducted investigating the relationship between the performance of an aspect based language model in terms of perplexity and the corresponding information retrieval performance obtained. It is observed, on the corpora considered, that the perplexity of the language model has a systematic relationship with the achievable precision recall performance though it is not statistically significant.",
"title": ""
},
{
"docid": "efcfb0aac56068374d861f24775c9cce",
"text": "Hekaton is a new database engine optimized for memory resident data and OLTP workloads. Hekaton is fully integrated into SQL Server; it is not a separate system. To take advantage of Hekaton, a user simply declares a table memory optimized. Hekaton tables are fully transactional and durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both Hekaton tables and regular tables and a transaction can update data in both types of tables. T-SQL stored procedures that reference only Hekaton tables can be compiled into machine code for further performance improvements. The engine is designed for high con-currency. To achieve this it uses only latch-free data structures and a new optimistic, multiversion concurrency control technique. This paper gives an overview of the design of the Hekaton engine and reports some experimental results.",
"title": ""
},
{
"docid": "db93b1e7b56f0d37c69fce9094b72bc3",
"text": "The Man-In-The-Middle (MITM) attack is one of the most well known attacks in computer security, representing one of the biggest concerns for security professionals. MITM targets the actual data that flows between endpoints, and the confidentiality and integrity of the data itself. In this paper, we extensively review the literature on MITM to analyse and categorize the scope of MITM attacks, considering both a reference model, such as the open systems interconnection (OSI) model, as well as two specific widely used network technologies, i.e., GSM and UMTS. In particular, we classify MITM attacks based on several parameters, like location of an attacker in the network, nature of a communication channel, and impersonation techniques. Based on an impersonation techniques classification, we then provide execution steps for each MITM class. We survey existing countermeasures and discuss the comparison among them. Finally, based on our analysis, we propose a categorisation of MITM prevention mechanisms, and we identify some possible directions for future research.",
"title": ""
},
{
"docid": "88d00a5be341f523ecc2898e7dea26f3",
"text": "Spoken dialog systems help users achieve a task using natural language. Noisy speech recognition and ambiguity in natural language motivate statistical approaches that model distributions over the user’s goal at every step in the dialog. The task of tracking these distributions, termed Dialog State Tracking, is therefore an essential component of any spoken dialog system. In recent years, the Dialog State Tracking Challenges have provided a common testbed and evaluation framework for this task, as well as labeled dialog data. As a result, a variety of machine-learned methods have been successfully applied to Dialog State Tracking. This paper reviews the machine-learning techniques that have been adapted to Dialog State Tracking, and gives an overview of published evaluations. Discriminative machine-learned methods outperform generative and rule-based methods, the previous state-of-the-art.",
"title": ""
},
{
"docid": "c194a38da7d9c1615f6fa2643a7aa66e",
"text": "Go gaming is a struggle for territory control between rival, black and white, stones on a board. We model the Go dynamics in a game by means of the Ising model whose interaction coefficients reflect essential rules and tactics employed in Go to build long-term strategies. At any step of the game, the energy functional of the model provides the control degree (strength) of a player over the board. A close fit between predictions of the model with actual games is obtained.",
"title": ""
},
{
"docid": "11b0994822e2e1a0a29c799e48b8ed39",
"text": "Beyond the advantages in design of running gear and railway vehicles itself and introducing the active wheelset steering control systems many railroads and tram companies still use huge number of old fashion vehicles. Dynamic performance, safety and maintenance cost of which strongly depend on the wheelset dynamics and particularly on how good is design of wheel and rail profiles. The paper presents a procedure for design of a wheel profile based on geometrical wheel/rail (w/r) contact characteristics which uses numerical optimization technique. The procedure has been developed by Railway Engineering Group in Delft University of Technology. The optimality criteria formulated using the requirements to railway track and wheelset, are related to stability of wheelset, cost efficiency of design and minimum wear of wheels and rails. The shape of a wheel profile has been varied during optimization. A new wheel profile is obtained for given target rolling radii difference function ‘ r y ∆ − ’ and rail profile. Measurements of new and worn wheel and rail profiles has been used to define the target ‘ r y ∆ − ’ curve. Finally dynamic simulations of vehicle with obtained wheel profile have been performed in ADAMS/Rail program package in order to control w/r wear and safety requirements. The proposed procedure has been applied to design of wheel profile for trams. Numerical results are presented and discussed.",
"title": ""
},
{
"docid": "71022e2197bfb99bd081928cf162f58a",
"text": "Ophthalmology and visual health research have received relatively limited attention from the personalized medicine community, but this trend is rapidly changing. Postgenomics technologies such as proteomics are being utilized to establish a baseline biological variation map of the human eye and related tissues. In this context, the choroid is the vascular layer situated between the outer sclera and the inner retina. The choroidal circulation serves the photoreceptors and retinal pigment epithelium (RPE). The RPE is a layer of cuboidal epithelial cells adjacent to the neurosensory retina and maintains the outer limit of the blood-retina barrier. Abnormal changes in choroid-RPE layers have been associated with age-related macular degeneration. We report here the proteome of the healthy human choroid-RPE complex, using reverse phase liquid chromatography and mass spectrometry-based proteomics. A total of 5309 nonredundant proteins were identified. Functional analysis of the identified proteins further pointed to molecular targets related to protein metabolism, regulation of nucleic acid metabolism, transport, cell growth, and/or maintenance and immune response. The top canonical pathways in which the choroid proteins participated were integrin signaling, mitochondrial dysfunction, regulation of eIF4 and p70S6K signaling, and clathrin-mediated endocytosis signaling. This study illustrates the largest number of proteins identified in human choroid-RPE complex to date and might serve as a valuable resource for future investigations and biomarker discovery in support of postgenomics ophthalmology and precision medicine.",
"title": ""
},
{
"docid": "37b5ab95b1b488c5aee9a5cfed87c095",
"text": "A key step in the understanding of printed documents is their classification based on the nature of information they contain and their layout. In this work we consider a dynamic scenario in which document classes are not known a priori and new classes can appear at any time. This open world setting is both realistic and highly challenging. We use an SVM-based classifier based only on image-level features and use a nearest-neighbor approach for detecting new classes. We assess our proposal on a real-world dataset composed of 562 invoices belonging to 68 different classes. These documents were digitalized after being handled by a corporate environment, thus they are quite noisy---e.g., big stamps and handwritten signatures at unfortunate positions and alike. The experimental results are highly promising.",
"title": ""
},
{
"docid": "34118709a36ba09a822202753cbff535",
"text": "Our healthcare sector daily collects a huge data including clinical examination, vital parameters, investigation reports, treatment follow-up and drug decisions etc. But very unfortunately it is not analyzed and mined in an appropriate way. The Health care industry collects the huge amounts of health care data which unfortunately are not “mined” to discover hidden information for effective decision making for health care practitioners. Data mining refers to using a variety of techniques to identify suggest of information or decision making knowledge in database and extracting these in a way that they can put to use in areas such as decision support , Clustering ,Classification and Prediction. This paper has developed a Computer-Based Clinical Decision Support System for Prediction of Heart Diseases (CCDSS) using Naïve Bayes data mining algorithm. CCDSS can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, spO2,chest pain type, heart rate, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. CCDSS is Webbased, user-friendly, scalable, reliable and expandable. It is implemented on the PHPplatform. Keywords—Computer-Based Clinical Decision Support System(CCDSS), Heart disease, Data mining, Naïve Bayes.",
"title": ""
},
{
"docid": "c0d722d72955dd1ec6df3cc24289979f",
"text": "Citing classic psychological research and a smattering of recent studies, Kassin, Dror, and Kukucka (2013) proposed the operation of a forensic confirmation bias, whereby preexisting expectations guide the evaluation of forensic evidence in a self-verifying manner. In a series of studies, we tested the hypothesis that knowing that a defendant had confessed would taint people's evaluations of handwriting evidence relative to those not so informed. In Study 1, participants who read a case summary in which the defendant had previously confessed were more likely to erroneously conclude that handwriting samples from the defendant and perpetrator were authored by the same person, and were more likely to judge the defendant guilty, compared with those in a no-confession control group. Study 2 replicated and extended these findings using a within-subjects design in which participants rated the same samples both before and after reading a case summary. These findings underscore recent critiques of the forensic sciences as subject to bias, and suggest the value of insulating forensic examiners from contextual information.",
"title": ""
},
{
"docid": "731a3a94245b67df3e362ac80f41155f",
"text": "Opportunistic networking offers many appealing application perspectives from local social-networking applications to supporting communications in remote areas or in disaster and emergency situations. Yet, despite the increasing penetration of smartphones, opportunistic networking is not feasible with most popular mobile devices. There is still no support for WiFi Ad-Hoc and protocols such as Bluetooth have severe limitations (short range, pairing). We believe that WiFi Ad-Hoc communication will not be supported by most popular mobile OSes (i.e., iOS and Android) and that WiFi Direct will not bring the desired features. Instead, we propose WiFi-Opp, a realistic opportunistic setup relying on (i) open stationary APs and (ii) spontaneous mobile APs (i.e., smartphones in AP or tethering mode), a feature used to share Internet access, which we use to enable opportunistic communications. We compare WiFi-Opp to WiFi Ad-Hoc by replaying real-world contact traces and evaluate their performance in terms of capacity for content dissemination as well as energy consumption. While achieving comparable throughput, WiFi-Opp is up to 10 times more energy efficient than its Ad-Hoc counterpart. Eventually, a proof of concept demonstrates the feasibility of WiFi-Opp, which opens new perspectives for opportunistic networking.",
"title": ""
},
{
"docid": "cd8bd76ecebbd939400b4724499f7592",
"text": "Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so often leverages RGB large datasets, by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching bottom layers, which is key to learn modality-specific features. In contrast, we focus on the bottom layers, and propose an alternative strategy to learn depth features combining local weakly supervised training from patches followed by global fine tuning with images. This strategy is capable of learning very discriminative depthspecific features with limited depth images, without resorting to Places-CNN. In addition we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them in a common space and further leaning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth only and combined RGB-D data.",
"title": ""
}
] |
scidocsrr
|
8b9fc1a0fa1e965b8eaa9fecbd5c81ba
|
Gaussian Processes for Classification: Mean-Field Algorithms
|
[
{
"docid": "f2f5495973c560f15c307680bd5d3843",
"text": "The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions . In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.",
"title": ""
},
{
"docid": "089573eaa8c1ad8c7ad244a8ccca4049",
"text": "We consider the problem of assigning an input vector to one of m classes by predicting P(c|x) for c = 1, o, m. For a twoclass problem, the probability of class one given x is estimated by s(y(x)), where s(y) = 1/(1 + ey ). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace’s approximation. The method is generalized to multiclass problems (m > 2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets.",
"title": ""
}
] |
[
{
"docid": "5e5574605d6e4573098028b98c45923e",
"text": "In this paper we study the role of trust in enhancing asymmetric partnership formation. First we briefly review the role of trust. Then we analyze the state-of-the-art of the theoretical and empirical literature on trust creation and antecedents for experienced trustworthiness. As a result of the literature review and our knowledge of the context in praxis, we create a model on organizational trust building where the interplay of inter-organizational and inter-personal trust is scrutinized. Potential challenges for our model are first the asymmetry of organizations and actors and secondly the volatility of the business. The opportunity window for partnering firms may be very short i.e. there is not much time for natural development of trust based on incremental investments and social or character similarity, but so called “fast” or “swift” trust is needed. As a managerial contribution we suggest some practices and processes, which could be used for organizational trust building. These are developed from the viewpoint of large organization boundary-spanners (partner/vendor managers) developing asymmetric technology partnerships. Leveraging Complementary Benefits in a Telecom Network Individual specialization and organizational focus on core competencies leads to deep but narrow competencies. Thus complementary knowledge, resources and skills are needed. Ståhle (1998, 85 and 86) explains the mutual interdependence of individuals in a system by noting that actors always belong to social systems, but they may actualize only by relating to others. In order to transfer knowledge and learn social actors need to be able to connect and for that they need to build trust. Also according to Luhmann (1995, 112) each system first tests the bond of trust and only then starts processing the meaning. In line with Arrow (1974) we conclude that ability to build trust is a necessary (even if not sufficient) precondition to relationships in a social system (network). 1 Conceptualized also as “double contingency” (Luhmann 1995, 118). In telecommunications the asymmetric technology partnerships between large incumbent players and specialized suppliers are increasingly common. Technological development and the convergence of information technology, telecommunications and media industry has created potential business areas, where knowledge of complementary players is needed. Complementary capabilities often mean asymmetric partnerships, where partnering firms have different skills, resources and knowledge. Perceived or believed dissimilarities in values, goals, time-horizon, decision-making processes, culture and logic of strategy imply barriers for cooperation to evolve (Doz 1988, Blomqvist 1999). A typical case is a partnership with a large and incumbent telecommunications firm and a small software supplier. The small software firm supplies the incumbent firm with state-of-the-art innovative service applications, which complement the incumbent firm’s platform. Risk and trust are involved in every transaction where the simultaneous exchange is unavailable (Arrow 1973, 24). Companies engaged in a technology partnership exchange and share valuable information, which may not be safeguarded by secrecy agreements. Various types of risks, e.g. failures in technology development, performance or market risk or unintended disclosure of proprietary information and partner's opportunistic behavior in e.g. absorbing and imitating the technology or recruiting key persons are present. 
Building trust is particularly important for complementary parties to reach the potential network benefits of scale and scope, yet tedious due to asymmetric characteristics. Natural trust creation is constrained as personal and process sources of trust (Zucker 1986) are limited due to partners' different cultures and short experience from interaction. In organizational relationships the basis of trust must be extended beyond personal and individual relationships (Creed and Miles 1996, Hardy et al. 1998). In asymmetric technology partnerships the dominant large partner may be tempted to use power to ensure control and authority. Hardy et al. (1998, 82) discuss a potential capitulation of a dependent partner in an asymmetric relationship. This means that the subordinate organization loses its ability to operate in full as a result of anticipated reactions from a more powerful organization. Therefore, as an expected source for spear-edge innovations, it fails to realize its potential in full. Thus the potential for dominant players to leverage the “synergistic creativity” of specialized suppliers realizes only through double-contingency relationships characterized by mutual interdependency and equity (Luhmann 1995). Such relationships may leverage the innovative abilities of small and specialized suppliers, but only if asymmetric partners are able to build organizational trust and subsequently connect with each other. In the telecommunications both the technological and market uncertainty are high. Considerable rewards may be gained, yet the players face considerable risks. There is little time to study the volatile markets or learn the constantly emerging new technologies. In such a turbulent business the players are forced to constant strategizing. Partnerships may have to be decided almost “overnight” and many are of temporary nature. Players in the volatile telecommunications also know that the “shadow-of-the-future” might be surprisingly short, since the various alliances and consortiums are in constant move. Previous research on trust shows that trust develops gradually and common future is a strong motivator for a trusting relationship (e.g. Axelrod 1984). In telecommunications the partnering firms need trust more than ever, yet they have little chance to commit themselves gradually to the relationship or to experiment with the values and goals of the other. [Footnote 2: By asymmetry is meant a non-symmetrical situation between actors. Economists discuss asymmetrical information leading to potential opportunism. Another theme commonly related to asymmetry is power, which is closely linked to company size. In asymmetric technology partnerships asymmetry manifests in different corporate cultures, management and types of resources. In this context asymmetry could be defined as “difference in knowledge, power and culture of actors”.] Due to great risks the ability to build trust is crucial, yet because of the high volatility and short shadow-of-the-future it is especially challenging. Building trust. Trust is seen as a necessary antecedent for cooperation (Axelrod 1984) and as leading to constructive and cooperative behavior vital for long-term relationships (Barney 1981, Morgan and Hunt 1994). Trust is vital both for innovative work within the organization, e.g. in project teams (Jones and George 1998), and between organizations, e.g. strategic alliances (Doz 1999, Zaheer et al. 1998) and R & D partnerships (Dodgson 1993). 
In this paper trust is defined as \"actor's expectation of the other party's competence, goodwill and behavior\". It is believed that in business context both competence and goodwill levels are needed for trust to develop (Blomqvist 1997). The relevant competence (technical capabilities, skills and knowhow) is a necessary antecedent and base for trust in professional relationships of business context. Especially so in the technology partnership where potential partners are assumed to have technological knowledge and competencies. Signs of goodwill (moral responsibility and positive intentions toward the other) are also necessary for the trusting party to be able to accept a potentially vulnerable position (risk inherent). Positive intentions appear as signs of cooperation and partner’s proactive behavior. Competence Goodwill Goodwill Competence Behavior Behavior Figure 1. Development of trust through layers of trustworthiness Bidault and Jarillo (1997) have added a third dimension to trust i.e. the actual behavior of parties. Goodwill-dimension of trust includes positive intentions toward the other, but along time, when the relationship is developing, the actual behavior e.g. that the trustee fulfills the positive intentions enhances trustworthiness (see Figure 1). Already at the very first meetings the behavioral dimension is present in signs and signals, e.g. what information is revealed and in which manner. In the partnering process (along time) the actual behavior e.g. kept promises become more visible and easier to evaluate. Role of trust has been studied quite extensively and in different contexts (e.g. Larson 1992, Swan 1995, Sydow 1998, Morgan and Hunt 1994, O’Brien 1995). Development of personal trust has been studied among psychologists and socio-psychologists (Deutch 1958, Blau 1966, Rotter 1967 and Good 1988). Development of organizational trust has been studied much less (Halinen 1994, Das and Teng 1998). In this paper we attempt to model interorganizational trust building and suggest some managerial tools to build trust. We build on Anthony Giddens (1984) theory of structuration and a model on experiencing trust by Jones and George (1998). According to social exchange theory (Blau 1966, Whitener et al. 1998 among others) information, advice, social support and recognition are important means in trust building, which is created by repeated interactions and reciprocity. A different view to trust is offered by agency theory developed by economists and focussing in the relationship between principals and agents (e.g. employer and employee). According to agency theory relationship management, e.g. socialization of corporate values, policies and industry norms (e.g. Eisenhardt 1985, 135 and 148) may control moral hazard inherent in such relationships. Researchers disagree whether trust can be intentionally created. According to Sydow (1998) trust is very difficult to develop and sustain. It is however believed that the conditions (processes, routines and settings) affectin",
"title": ""
},
{
"docid": "2706e8ed981478ad4cb2db060b3d9844",
"text": "We develop a technique for transfer learning in machine comprehension (MC) using a novel two-stage synthesis network (SynNet). Given a high-performing MC model in one domain, our technique aims to answer questions about documents in another domain, where we use no labeled data of question-answer pairs. Using the proposed SynNet with a pretrained model on the SQuAD dataset, we achieve an F1 measure of 46.6% on the challenging NewsQA dataset, approaching performance of in-domain models (F1 measure of 50.0%) and outperforming the out-ofdomain baseline by 7.6%, without use of provided annotations.1",
"title": ""
},
{
"docid": "d08529ef66abefda062a414acb278641",
"text": "Spend your few moment to read a book even only few pages. Reading book is not obligation and force for everybody. When you don't want to read, you can get punishment from the publisher. Read a book becomes a choice of your different characteristics. Many people with reading habit will always be enjoyable to read, or on the contrary. For some reasons, this inductive logic programming techniques and applications tends to be the representative book in this website.",
"title": ""
},
{
"docid": "e80212110cc32d51a3782259932a8490",
"text": "In this paper, a new approach for handling fuzzy AHP is introduced, with the use of triangular fuzzy numbers for pairwise comprison scale of fuzzy AHP, and the use of the extent analysis method for the synthetic extent value S i of the pairwise comparison. By applying the principle of the comparison of fuzzy numbers, that is, V ( M l >1 M 2) = 1 iff mj >i m z, V ( M z >/M~) = hgt(M~ A M z) =/xM,(d), the vectors of weight with respect to each element under a certain criterion are represented by d( A i) = min V(S i >1 Sk), k = 1, 2 . . . . . n; k -4= i. This decision process is demonstrated by an example.",
"title": ""
},
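A minimal Python sketch of the extent analysis described in the passage above, using triangular fuzzy numbers (l, m, u) and the comparison-degree formulas quoted in the abstract. This is an illustrative reading of the method, not the authors' code; the example matrix values are invented.

```python
# Extent analysis on triangular fuzzy numbers (l, m, u), following the
# comparison-degree formulas quoted in the abstract (illustrative sketch).

def degree_of_possibility(m2, m1):
    """V(M2 >= M1) for triangular fuzzy numbers M = (l, m, u)."""
    l1, mid1, u1 = m1
    l2, mid2, u2 = m2
    if mid2 >= mid1:
        return 1.0
    if l1 >= u2:
        return 0.0
    # Height of the intersection point of the two membership functions.
    return (l1 - u2) / ((mid2 - u2) - (mid1 - l1))

def synthetic_extents(matrix):
    """Fuzzy synthetic extent S_i for each row of a fuzzy pairwise-comparison matrix."""
    row_sums = [tuple(sum(x[k] for x in row) for k in range(3)) for row in matrix]
    total = tuple(sum(r[k] for r in row_sums) for k in range(3))
    # Multiply each row sum by the inverse of the grand total (note reversed l/u).
    return [(r[0] / total[2], r[1] / total[1], r[2] / total[0]) for r in row_sums]

def weights(matrix):
    s = synthetic_extents(matrix)
    d = [min(degree_of_possibility(s[i], s[k]) for k in range(len(s)) if k != i)
         for i in range(len(s))]
    norm = sum(d)
    return [x / norm for x in d]

# Example: three criteria compared with triangular fuzzy judgments (made up).
fuzzy_matrix = [
    [(1, 1, 1),       (1, 2, 3),     (2, 3, 4)],
    [(1/3, 1/2, 1),   (1, 1, 1),     (1, 2, 3)],
    [(1/4, 1/3, 1/2), (1/3, 1/2, 1), (1, 1, 1)],
]
print(weights(fuzzy_matrix))
```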
{
"docid": "6875d41e412d71f45d6d4ea43697ed80",
"text": "Context Emergency department visits by older adults are often due to adverse drug events, but the proportion of these visits that are the result of drugs designated as inappropriate for use in this population is unknown. Contribution Analyses of a national surveillance study of adverse drug events and a national outpatient survey estimate that Americans age 65 years or older have more than 175000 emergency department visits for adverse drug events yearly. Three commonly prescribed drugs accounted for more than one third of visits: warfarin, insulin, and digoxin. Caution The study was limited to adverse events in the emergency department. Implication Strategies to decrease adverse drug events among older adults should focus on warfarin, insulin, and digoxin. The Editors Adverse drug events cause clinically significant morbidity and mortality and are associated with large economic costs (15). They are common in older adults, regardless of whether they live in the community, reside in long-term care facilities, or are hospitalized (59). Most physicians recognize that prescribing medications to older patients requires special considerations, but nongeriatricians are typically unfamiliar with the most commonly used measure of medication appropriateness for older patients: the Beers criteria (1012). The Beers criteria are a consensus-based list of medications identified as potentially inappropriate for use in older adults. The criteria were introduced in 1991 to help researchers evaluate prescription quality in nursing homes (10). The Beers criteria were updated in 1997 and 2003 to apply to all persons age 65 years or older, to include new medications judged to be ineffective or to pose unnecessarily high risk, and to rate the severity of adverse outcomes (11, 12). Prescription rates of Beers criteria medications have become a widely used measure of quality of care for older adults in research studies in the United States and elsewhere (1326). The application of the Beers criteria as a measure of health care quality and safety has expanded beyond research studies. The Centers for Medicare & Medicaid Services incorporated the Beers criteria into federal safety regulations for long-term care facilities in 1999 (27). The prescription rate of potentially inappropriate medications is one of the few medication safety measures in the National Healthcare Quality Report (28) and has been introduced as a Health Plan and Employer Data and Information Set quality measure for managed care plans (29). Despite widespread adoption of the Beers criteria to measure prescription quality and safety, as well as proposals to apply these measures to additional settings, such as medication therapy management services under Medicare Part D (30), population-based data on the effect of adverse events from potentially inappropriate medications are sparse and do not compare the risks for adverse events from Beers criteria medications against those from other medications (31, 32). Adverse drug events that lead to emergency department visits are clinically significant adverse events (5) and result in increased health care resource utilization and expense (6). We used nationally representative public health surveillance data to estimate the number of emergency department visits for adverse drug events involving Beers criteria medications and compared the number with that for adverse drug events involving other medications. 
We also estimated the frequency of outpatient prescription of Beers criteria medications and other medications to calculate and compare the risks for emergency department visits for adverse drug events per outpatient prescription visit. Methods Data Sources National estimates of emergency department visits for adverse drug events were based on data from the 58 nonpediatric hospitals participating in the National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance (NEISS-CADES) System, a nationally representative, size-stratified probability sample of hospitals (excluding psychiatric and penal institutions) in the United States and its territories with a minimum of 6 beds and a 24-hour emergency department (Figure 1) (3335). As described elsewhere (5, 34), trained coders at each hospital reviewed clinical records of every emergency department visit to report physician-diagnosed adverse drug events. Coders reported clinical diagnosis, medication implicated in the adverse event, and narrative descriptions of preceding circumstances. Data collection, management, quality assurance, and analyses were determined to be public health surveillance activities by the Centers for Disease Control and Prevention (CDC) and U.S. Food and Drug Administration human subjects oversight bodies and, therefore, did not require human subject review or institutional review board approval. Figure 1. Data sources and descriptions. NAMCS= National Ambulatory Medical Care Survey (36); NEISS-CADES= National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance System (5, 3335); NHAMCS = National Hospital Ambulatory Medical Care Survey (37). *The NEISS-CADES is a 63-hospital national probability sample, but 5 pediatric hospitals were not included in this analysis. National estimates of outpatient prescription were based on 2 cross-sectional surveys, the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS), designed to provide information on outpatient office visits and visits to hospital outpatient clinics and emergency departments (Figure 1) (36, 37). These surveys have been previously used to document the prescription rates of inappropriate medications (17, 3840). Definition of Potentially Inappropriate Medications The most recent iteration of the Beers criteria (12) categorizes 41 medications or medication classes as potentially inappropriate under any circumstances (always potentially inappropriate) and 7 medications or medication classes as potentially inappropriate when used in certain doses, frequencies, or durations (potentially inappropriate in certain circumstances). For example, ferrous sulfate is considered to be potentially inappropriate only when used at dosages greater than 325 mg/d, but not potentially inappropriate if used at lower dosages. For this investigation, we included the Beers criteria medications listed in Table 1. Because medication dose, duration, and frequency were not always available in NEISS-CADES and are not reported in NAMCS and NHAMCS, we included medications regardless of dose, duration, or frequency of use. We excluded 3 medications considered to be potentially inappropriate when used in specific formulations (short-acting nifedipine, short-acting oxybutynin, and desiccated thyroid) because NEISS-CADES, NAMCS, and NHAMCS do not reliably identify these formulations. Table 1. 
Potentially Inappropriate Medications for Individuals Age 65 Years or Older The updated Beers criteria identify additional medications as potentially inappropriate if they are prescribed to patients who have certain preexisting conditions. We did not include these medications because they have rarely been used in previous studies or safety measures and NEISS-CADES, NAMCS, and NHAMCS do not reliably identify preexisting conditions. Identification of Emergency Department Visits for Adverse Drug Events We defined an adverse drug event case as an incident emergency department visit by a patient age 65 years or older, from 1 January 2004 to 31 December 2005, for a condition that the treating physician explicitly attributed to the use of a drug or for a drug-specific effect (5). Adverse events include allergic reactions (immunologically mediated effects) (41), adverse effects (undesirable pharmacologic or idiosyncratic effects at recommended doses) (41), unintentional overdoses (toxic effects linked to excess dose or impaired excretion) (41), or secondary effects (such as falls and choking). We excluded cases of intentional self-harm, therapeutic failures, therapy withdrawal, drug abuse, adverse drug events that occurred as a result of medical treatment received during the emergency department visit, and follow-up visits for a previously diagnosed adverse drug event. We defined an adverse drug event from Beers criteria medications as an emergency department visit in which a medication from Table 1 was implicated. Identification of Outpatient Prescription Visits We used the NAMCS and NHAMCS public use data files for the most recent year available (2004) to identify outpatient prescription visits. We defined an outpatient prescription visit as any outpatient office, hospital clinic, or emergency department visit at which treatment with a medication of interest was either started or continued. We identified medications by generic name for those with a single active ingredient and by individual active ingredients for combination products. We categorized visits with at least 1 medication identified in Table 1 as involving Beers criteria medications. Statistical Analysis Each NEISS-CADES, NAMCS, and NHAMCS case is assigned a sample weight on the basis of the inverse probability of selection (33, 4244). We calculated national estimates of emergency department visits and prescription visits by summing the corresponding sample weights, and we calculated 95% CIs by using the SURVEYMEANS procedure in SAS, version 9.1 (SAS Institute, Cary, North Carolina), to account for the sampling strata and clustering by site. To obtain annual estimates of visits for adverse events, we divided NEISS-CADES estimates for 20042005 and corresponding 95% CI end points by 2. Estimates based on small numbers of cases (<20 cases for NEISS-CADES and <30 cases for NAMCS and NHAMCS) or with a coefficient of variation greater than 30% are considered statistically unstable and are identified in the tables. To estimate the risk for adverse events relative to outpatient prescription",
"title": ""
},
{
"docid": "bbb33d4eb3894471e446db3e5bb936ab",
"text": "In this article we propose the novel approach to measure anthropometrical features such as height, width of shoulder, circumference of the chest, hip and waist. The sub-pixel processing and convex hull technique are used to efficiently measure the features from 2d image. The SVM technique is used to classify men and women based on measured features. The results of real data processing are presented.",
"title": ""
},
{
"docid": "53df69bf8750a7e97f12b1fcac14b407",
"text": "In photovoltaic (PV) power systems where a set of series-connected PV arrays (PVAs) is connected to a conventional two-level inverter, the occurrence of partial shades and/or the mismatching of PVAs leads to a reduction of the power generated from its potential maximum. To overcome these problems, the connection of the PVAs to a multilevel diode-clamped converter is considered in this paper. A control and pulsewidth-modulation scheme is proposed, capable of independently controlling the operating voltage of each PVA. Compared to a conventional two-level inverter system, the proposed system configuration allows one to extract maximum power, to reduce the devices voltage rating (with the subsequent benefits in device-performance characteristics), to reduce the output-voltage distortion, and to increase the system efficiency. Simulation and experimental tests have been conducted with three PVAs connected to a four-level three-phase diode-clamped converter to verify the good performance of the proposed system configuration and control strategy.",
"title": ""
},
{
"docid": "7d017a5a6116a08cc9009a2f009af120",
"text": "Route Designer, version 1.0, is a new retrosynthetic analysis package that generates complete synthetic routes for target molecules starting from readily available starting materials. Rules describing retrosynthetic transformations are automatically generated from reaction databases, which ensure that the rules can be easily updated to reflect the latest reaction literature. These rules are used to carry out an exhaustive retrosynthetic analysis of the target molecule, in which heuristics are used to mitigate the combinatorial explosion. Proposed routes are prioritized by an empirical rating algorithm to present a diverse profile of the most promising solutions. The program runs on a server with a web-based user interface. An overview of the system is presented together with examples that illustrate Route Designer's utility.",
"title": ""
},
{
"docid": "d6f1278ccb6de695200411137b85b89a",
"text": "The complexity of information systems is increasing in recent years, leading to increased effort for maintenance and configuration. Self-adaptive systems (SASs) address this issue. Due to new computing trends, such as pervasive computing, miniaturization of IT leads to mobile devices with the emerging need for context adaptation. Therefore, it is beneficial that devices are able to adapt context. Hence, we propose to extend the definition of SASs and include context adaptation. This paper presents a taxonomy of self-adaptation and a survey on engineering SASs. Based on the taxonomy and the survey, we motivate a new perspective on SAS including context adaptation.",
"title": ""
},
{
"docid": "7d01463ce6dd7e7e08ebaf64f6916b1d",
"text": "An effective location algorithm, which considers nonline-of-sight (NLOS) propagation, is presented. By using a new variable to replace the square term, the problem becomes a mathematical programming problem, and then the NLOS propagation’s effect can be evaluated. Compared with other methods, the proposed algorithm has high accuracy.",
"title": ""
},
{
"docid": "b4002e27c1c656d71dc4277ea0cca9a9",
"text": "This paper proposes a distributionally robust approach to logistic regression. We use the Wasserstein distance to construct a ball in the space of probability distributions centered at the uniform distribution on the training samples. If the radius of this ball is chosen judiciously, we can guarantee that it contains the unknown datagenerating distribution with high confidence. We then formulate a distributionally robust logistic regression model that minimizes a worst-case expected logloss function, where the worst case is taken over all distributions in the Wasserstein ball. We prove that this optimization problem admits a tractable reformulation and encapsulates the classical as well as the popular regularized logistic regression problems as special cases. We further propose a distributionally robust approach based on Wasserstein balls to compute upper and lower confidence bounds on the misclassification probability of the resulting classifier. These bounds are given by the optimal values of two highly tractable linear programs. We validate our theoretical out-of-sample guarantees through simulated and empirical experiments.",
"title": ""
},
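The tractable reformulation is not spelled out in the abstract above. As a loose illustration only, one frequently cited special case (transport cost on the features, labels left unperturbed) reduces to the empirical log-loss plus an ε-weighted norm penalty on the weights; whether this matches the paper's exact formulation is an assumption. The sketch below implements that special case with a generic smooth solver for brevity, on synthetic data.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative special case only: empirical log-loss + eps * ||w|| as a proxy
# for the worst-case expected log-loss over a Wasserstein ball. Data, radius
# and solver choice are all illustrative, not taken from the paper.

def logloss(z):
    return np.logaddexp(0.0, -z)            # log(1 + exp(-z)), numerically stable

def dr_objective(theta, X, y, eps):
    w, b = theta[:-1], theta[-1]
    margins = y * (X @ w + b)
    return logloss(margins).mean() + eps * np.linalg.norm(w, ord=2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + 0.3 * rng.normal(size=200))

theta0 = np.zeros(X.shape[1] + 1)
res = minimize(dr_objective, theta0, args=(X, y, 0.1), method="L-BFGS-B")
print("weights:", res.x[:-1], "bias:", res.x[-1])
```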
{
"docid": "f762e0937878a406e4200ab72d4d3463",
"text": "In this paper, patent pending substrate integrated waveguide (SIW) bandpass filters with moderate fractional bandwidth and improved stopband performance are proposed and demonstrated for a Ka-band satellite ground terminal. Nonphysical cross-coupling provided by higher order modes in the oversized SIW cavities is used to generate the finite transmission zeros far away from the passband for improved stopband performance. Different input/output topologies of the filter are discussed for wide stopband applications. Design considerations including the design approach, filter configuration, and tolerance analysis are addressed. Two fourth-order filters with a passband of 19.2-21.2 GHz are fabricated on a single-layer Rogers RT/Duroid 6002 substrate using linear arrays of metallized via-holes by a standard printed circuit board process. Measured results of the two filters agree very well with simulated results, showing the in-band insertion loss is 0.9 dB or better, and the stopband attenuation in the frequency band of 29.5-30 GHz is better than 50 dB. Measurements over a temperature range of -20degC to +40degC show the passband remains almost unchanged.",
"title": ""
},
{
"docid": "34508dac189b31c210d461682fed9f67",
"text": "Life is more than cat pictures. There are tough days, heartbreak, and hugs. Under what contexts do people share these feelings online, and how do their friends respond? Using millions of de-identified Facebook status updates with poster-annotated feelings (e.g., “feeling thankful” or “feeling worried”), we examine the magnitude and circumstances in which people share positive or negative feelings and characterize the nature of the responses they receive. We find that people share greater proportions of both positive and negative emotions when their friend networks are smaller and denser. Consistent with social sharing theory, hearing about a friendâs troubles on Facebook causes friends to reply with more emotional and supportive comments. Friendsâ comments are also more numerous and longer. Posts with positive feelings, on the other hand, receive more likes, and their comments have more positive language. Feelings that relate to the posterâs self worth, such as “feeling defeated,” “feeling unloved,” or “feeling accomplished” amplify these effects.",
"title": ""
},
{
"docid": "6e4798c01a0a241d1f3746cd98ba9421",
"text": "BACKGROUND\nLarge blood-based prospective studies can provide reliable assessment of the complex interplay of lifestyle, environmental and genetic factors as determinants of chronic disease.\n\n\nMETHODS\nThe baseline survey of the China Kadoorie Biobank took place during 2004-08 in 10 geographically defined regions, with collection of questionnaire data, physical measurements and blood samples. Subsequently, a re-survey of 25,000 randomly selected participants was done (80% responded) using the same methods as in the baseline. All participants are being followed for cause-specific mortality and morbidity, and for any hospital admission through linkages with registries and health insurance (HI) databases.\n\n\nRESULTS\nOverall, 512,891 adults aged 30-79 years were recruited, including 41% men, 56% from rural areas and mean age was 52 years. The prevalence of ever-regular smoking was 74% in men and 3% in women. The mean blood pressure was 132/79 mmHg in men and 130/77 mmHg in women. The mean body mass index (BMI) was 23.4 kg/m(2) in men and 23.8 kg/m(2) in women, with only 4% being obese (>30 kg/m(2)), and 3.2% being diabetic. Blood collection was successful in 99.98% and the mean delay from sample collection to processing was 10.6 h. For each of the main baseline variables, there is good reproducibility but large heterogeneity by age, sex and study area. By 1 January 2011, over 10,000 deaths had been recorded, with 91% of surviving participants already linked to HI databases.\n\n\nCONCLUSION\nThis established large biobank will be a rich and powerful resource for investigating genetic and non-genetic causes of many common chronic diseases in the Chinese population.",
"title": ""
},
{
"docid": "c75c8461134f3ad5855ef30a49f377fb",
"text": "Suspicious human activity recognition from surveillance video is an active research area of image processing and computer vision. Through the visual surveillance, human activities can be monitored in sensitive and public areas such as bus stations, railway stations, airports, banks, shopping malls, school and colleges, parking lots, roads, etc. to prevent terrorism, theft, accidents and illegal parking, vandalism, fighting, chain snatching, crime and other suspicious activities. It is very difficult to watch public places continuously, therefore an intelligent video surveillance is required that can monitor the human activities in real-time and categorize them as usual and unusual activities; and can generate an alert. Recent decade witnessed a good number of publications in the field of visual surveillance to recognize the abnormal activities. Furthermore, a few surveys can be seen in the literature for the different abnormal activities recognition; but none of them have addressed different abnormal activities in a review. In this paper, we present the state-of-the-art which demonstrates the overall progress of suspicious activity recognition from the surveillance videos in the last decade. We include a brief introduction of the suspicious human activity recognition with its issues and challenges. This paper consists of six abnormal activities such as abandoned object detection, theft detection, fall detection, accidents and illegal parking detection on road, violence activity detection, and fire detection. In general, we have discussed all the steps those have been followed to recognize the human activity from the surveillance videos in the literature; such as foreground object extraction, object detection based on tracking or non-tracking methods, feature extraction, classification; activity analysis and recognition. The objective of this paper is to provide the literature review of six different suspicious activity recognition systems with its general framework to the researchers of this field.",
"title": ""
},
{
"docid": "9fe531efea8a42f4fff1fe0465493223",
"text": "Time series classification has been around for decades in the data-mining and machine learning communities. In this paper, we investigate the use of convolutional neural networks (CNN) for time series classification. Such networks have been widely used in many domains like computer vision and speech recognition, but only a little for time series classification. We design a convolutional neural network that consists of two convolutional layers. One drawback with CNN is that they need a lot of training data to be efficient. We propose two ways to circumvent this problem: designing data-augmentation techniques and learning the network in a semi-supervised way using training time series from different datasets. These techniques are experimentally evaluated on a benchmark of time series datasets.",
"title": ""
},
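To make the passage above concrete, here is a small PyTorch sketch of a two-convolutional-layer network for univariate time series, together with window slicing as a simple data-augmentation step. The layer sizes and kernel widths are illustrative and are not claimed to be the paper's architecture.

```python
import torch
import torch.nn as nn

# Two-conv-layer 1D CNN for time series classification (illustrative sizes),
# plus window slicing as one possible data-augmentation technique.

class SmallTSCNN(nn.Module):
    def __init__(self, n_classes, length):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (length // 4), n_classes)

    def forward(self, x):                 # x: (batch, 1, length)
        h = self.features(x)
        return self.classifier(h.flatten(1))

def slice_augment(series, window, step):
    """Window slicing: overlapping sub-series that inherit the original label."""
    return [series[i:i + window] for i in range(0, len(series) - window + 1, step)]

# Toy usage
x = torch.randn(8, 1, 128)                # 8 series of length 128
model = SmallTSCNN(n_classes=3, length=128)
print(model(x).shape)                     # -> torch.Size([8, 3])
```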
{
"docid": "5cbc93a9844fcd026a1705ee031c6530",
"text": "Accompanying the rapid urbanization, many developing countries are suffering from serious air pollution problem. The demand for predicting future air quality is becoming increasingly more important to government's policy-making and people's decision making. In this paper, we predict the air quality of next 48 hours for each monitoring station, considering air quality data, meteorology data, and weather forecast data. Based on the domain knowledge about air pollution, we propose a deep neural network (DNN)-based approach (entitled DeepAir), which consists of a spatial transformation component and a deep distributed fusion network. Considering air pollutants' spatial correlations, the former component converts the spatial sparse air quality data into a consistent input to simulate the pollutant sources. The latter network adopts a neural distributed architecture to fuse heterogeneous urban data for simultaneously capturing the factors affecting air quality, e.g. meteorological conditions. We deployed DeepAir in our AirPollutionPrediction system, providing fine-grained air quality forecasts for 300+ Chinese cities every hour. The experimental results on the data from three-year nine Chinese-city demonstrate the advantages of DeepAir beyond 10 baseline methods. Comparing with the previous online approach in AirPollutionPrediction system, we have 2.4%, 12.2%, 63.2% relative accuracy improvements on short-term, long-term and sudden changes prediction, respectively.",
"title": ""
},
{
"docid": "2a60bb7773d2e5458de88d2dc0e78e54",
"text": "Many system errors do not emerge unless some intricate sequence of events occurs. In practice, this means that most systems have errors that only trigger after days or weeks of execution. Model checking [4] is an effective way to find such subtle errors. It takes a simplified description of the code and exhaustively tests it on all inputs, using techniques to explore vast state spaces efficiently. Unfortunately, while model checking systems code would be wonderful, it is almost never done in practice: building models is just too hard. It can take significantly more time to write a model than it did to write the code. Furthermore, by checking an abstraction of the code rather than the code itself, it is easy to miss errors.The paper's first contribution is a new model checker, CMC, which checks C and C++ implementations directly, eliminating the need for a separate abstract description of the system behavior. This has two major advantages: it reduces the effort to use model checking, and it reduces missed errors as well as time-wasting false error reports resulting from inconsistencies between the abstract description and the actual implementation. In addition, changes in the implementation can be checked immediately without updating a high-level description.The paper's second contribution is demonstrating that CMC works well on real code by applying it to three implementations of the Ad-hoc On-demand Distance Vector (AODV) networking protocol [7]. We found 34 distinct errors (roughly one bug per 328 lines of code), including a bug in the AODV specification itself. Given our experience building systems, it appears that the approach will work well in other contexts, and especially well for other networking protocols.",
"title": ""
},
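CMC, described in the passage above, checks real C and C++ implementations directly. The toy Python sketch below only mimics the underlying search loop (exhaustive exploration of reachable states with an invariant check and counterexample traces) to make the idea concrete; the state space and invariant are invented.

```python
from collections import deque

# Toy explicit-state exploration: enumerate reachable states breadth-first and
# report an event trace leading to any state that violates the invariant.
# CMC runs the actual implementation; this sketch only illustrates the search.

def model_check(initial_state, transitions, invariant):
    """transitions(state) -> iterable of (event_name, next_state)."""
    seen = {initial_state}
    queue = deque([(initial_state, [])])
    while queue:
        state, trace = queue.popleft()
        if not invariant(state):
            return trace                  # counterexample: sequence of events
        for event, nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [event]))
    return None                           # invariant holds in all reachable states

# Invented example: two nodes tracking a sequence number must never differ by > 1.
def transitions(state):
    a, b = state
    yield "a_increments", (a + 1 if a < 3 else a, b)
    yield "b_syncs", (a, a)

trace = model_check((0, 0), transitions, lambda s: abs(s[0] - s[1]) <= 1)
print("counterexample:", trace)
```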
{
"docid": "ab2096798261a8976846c5f72eeb18ee",
"text": "ion Description and Purpose Variable names Provide human readable names to data addresses Function names Provide human readable names to function addresses Control structures Eliminate ‘‘spaghetti’’ code (The ‘‘goto’’ statement is no longer necessary.) Argument passing Default argument values, keyword specification of arguments, variable length argument lists, etc. Data structures Allow conceptual organization of data Data typing Binds the type of the data to the type of the variable Static Insures program correctness, sacrificing generality. Dynamic Greater generality, sacrificing guaranteed correctness. Inheritance Allows creation of families of related types and easy re-use of common functionality Message dispatch Providing one name to multiple implementations of the same concept Single dispatch Dispatching to a function based on the run-time type of one argument Multiple dispatch Dispatching to a function based on the run-time type of multiple arguments. Predicate dispatch Dispatching to a function based on run-time state of arguments Garbage collection Automated memory management Closures Allow creation, combination, and use of functions as first-class values Lexical binding Provides access to values in the defining context Dynamic binding Provides access to values in the calling context (.valueEnvir in SC) Co-routines Synchronous cooperating processes Threads Asynchronous processes Lazy evaluation Allows the order of operations not to be specified. Infinitely long processes and infinitely large data structures can be specified and used as needed. Applying Language Abstractions to Computer Music The SuperCollider language provides many of the abstractions listed above. SuperCollider is a dynamically typed, single-inheritance, single-argument dispatch, garbage-collected, object-oriented language similar to Smalltalk (www.smalltalk.org). In SuperCollider, everything is an object, including basic types like letters and numbers. Objects in SuperCollider are organized into classes. The UGen class provides the abstraction of a unit generator, and the Synth class represents a group of UGens operating as a group to generate output. An instrument is constructed functionally. That is, when one writes a sound-processing function, one is actually writing a function that creates and connects unit generators. This is different from a procedural or static object specification of a network of unit generators. Instrument functions in SuperCollider can generate the network of unit generators using the full algorithmic capability of the language. For example, the following code can easily generate multiple versions of a patch by changing the values of the variables that specify the dimensions (number of exciters, number of comb delays, number of allpass delays). In a procedural language like Csound or a ‘‘wire-up’’ environment like Max, a different patch would have to be created for different values for the dimensions of the patch.",
"title": ""
},
{
"docid": "95a74edfac2336ed113eeec04715a5ea",
"text": "Remote sensing images obtained by remote sensing are a key source of data for studying large-scale geographic areas. From 2013 onwards, a new generation of land remote sensing satellites from USA, China, Brazil, India and Europe will produce in one year as much data as 5 years of the Landsat-7 satellite. Thus, the research community needs new ways to analyze large data sets of remote sensing imagery. To address this need, this paper describes a toolbox for combing land remote sensing image analysis with data mining techniques. Data mining methods are being extensively used for statistical analysis, but up to now have had limited use in remote sensing image interpretation due to the lack of appropriate tools. The toolbox described in this paper is the Geographic Data Mining Analyst (GeoDMA). It has algorithms for segmentation, feature extraction, feature selection, classification, landscape metrics and multi-temporal methods for change detection and analysis. GeoDMA uses decision-tree strategies adapted for spatial data mining. It connects remotely sensed imagery with other geographic data types using access to local or remote databases. GeoDMA has methods to assess the accuracy of simulation models, as well as tools for spatio-temporal analysis, including a visualization of time-series that helps users to find patterns in cyclic events. The software includes a new approach for analyzing spatio-temporal data based on polar coordinates transformation. This method creates a set of descriptive features that improves the classification accuracy of multi-temporal image databases. GeoDMA is tightly integrated with TerraView GIS, so its users have access to all traditional GIS features. To demonstrate GeoDMA, we show two case studies on land use and land cover change.",
"title": ""
}
] |
scidocsrr
|
e6461fbf72b6f658bfed095ebad105aa
|
A Multi-Stage Strategy to Perspective Rectification for Mobile Phone Camera-Based Document Images
|
[
{
"docid": "a798db9dfcfec4b8149de856c7e69b48",
"text": "Compared to scanned images, document pictures captured by camera can suffer from distortions due to perspective and page warping. It is necessary to restore a frontal planar view of the page before other OCR techniques can be applied. In this paper we describe a novel approach for flattening a curved document in a single picture captured by an uncalibrated camera. To our knowledge this is the first reported method able to process general curved documents in images without camera calibration. We propose to model the page surface by a developable surface, and exploit the properties (parallelism and equal line spacing) of the printed textual content on the page to recover the surface shape. Experiments show that the output images are much more OCR friendly than the original ones. While our method is designed to work with any general developable surfaces, it can be adapted for typical special cases including planar pages, scans of thick books, and opened books.",
"title": ""
}
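The positive passage above handles general curved (developable) pages without camera calibration by exploiting the parallelism and equal spacing of text lines. As a much simpler point of reference, the sketch below shows only the planar special case: once four page corners are located, a single homography restores a fronto-parallel view with OpenCV. The file name and corner coordinates are illustrative.

```python
import cv2
import numpy as np

# Planar special case of perspective rectification: map four detected page
# corners to a fronto-parallel rectangle. The paper's method goes further,
# flattening curved pages modeled as developable surfaces without calibration.

image = cv2.imread("document_photo.jpg")           # illustrative path
corners = np.float32([[102, 80], [590, 65],        # assumed detected corners:
                      [620, 845], [85, 860]])      # TL, TR, BR, BL
width, height = 600, 800                           # target page size in pixels
target = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

H = cv2.getPerspectiveTransform(corners, target)
flat = cv2.warpPerspective(image, H, (width, height))
cv2.imwrite("rectified.jpg", flat)
```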
] |
[
{
"docid": "749f79007256f570b73983b8d3f36302",
"text": "This paper addresses some of the potential benefits of using fuzzy logic controllers to control an inverted pendulum system. The stages of the development of a fuzzy logic controller using a four input Takagi-Sugeno fuzzy model were presented. The main idea of this paper is to implement and optimize fuzzy logic control algorithms in order to balance the inverted pendulum and at the same time reducing the computational time of the controller. In this work, the inverted pendulum system was modeled and constructed using Simulink and the performance of the proposed fuzzy logic controller is compared to the more commonly used PID controller through simulations using Matlab. Simulation results show that the fuzzy logic controllers are far more superior compared to PID controllers in terms of overshoot, settling time and response to parameter changes.",
"title": ""
},
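The passage above uses a four-input Takagi–Sugeno model built in Simulink/Matlab. Purely as an illustration of the mechanics, here is a two-input, zero-order Takagi–Sugeno controller in Python with triangular membership functions; the membership breakpoints, rule table, and force values are invented.

```python
# Minimal zero-order Takagi-Sugeno fuzzy controller sketch for an inverted
# pendulum, using only angle and angular velocity (the paper uses four inputs).
# All numeric parameters below are illustrative.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(x, spread):
    return {"neg": tri(x, -2 * spread, -spread, 0.0),
            "zero": tri(x, -spread, 0.0, spread),
            "pos": tri(x, 0.0, spread, 2 * spread)}

# Rule table: (angle label, velocity label) -> crisp force (N), Sugeno-style.
RULES = {("neg", "neg"): -15, ("neg", "zero"): -8, ("neg", "pos"): 0,
         ("zero", "neg"): -6, ("zero", "zero"): 0, ("zero", "pos"): 6,
         ("pos", "neg"): 0,  ("pos", "zero"): 8,  ("pos", "pos"): 15}

def control_force(angle, angular_velocity):
    mu_a = fuzzify(angle, spread=0.2)             # radians
    mu_v = fuzzify(angular_velocity, spread=1.0)  # rad/s
    num = den = 0.0
    for (la, lv), force in RULES.items():
        w = min(mu_a[la], mu_v[lv])               # rule firing strength
        num += w * force
        den += w
    return num / den if den else 0.0

print(control_force(0.1, -0.3))
```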
{
"docid": "1e669e8c849c92ee52b573e0e19ea0f0",
"text": "We model a retailer’s assortment planning problem under a ranking-based choice model of consumer preferences. Under this consumer choice model each customer belongs to a type, where a type is a ranking of the potential products by the order of preference, and the customer purchases his highest ranked product (if any) offered in the assortment. In our model we consider products with different price/cost parameters, we assume that the retailer incurs a fixed carrying cost per product offered, a substitution penalty for each customer who does not purchase his first choice and a lost sale penalty cost for each customer who leaves the store empty-handed. In the absence of any restrictions on the consumer types, searching for the optimal assortment using enumeration or integer programming is not practically feasible. The optimal assortment has very little structure so that simple greedy-type heuristics often fail to find the optimal assortment and have very poor worst case bounds. We develop an effective algorithm, called the In-Out Algorithm, which always provides an optimal solution and show numerically that it is very fast, e.g., more than 10,000 times faster than enumeration on a problem with 20 products.",
"title": ""
},
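The In-Out Algorithm itself is not described in the abstract above, so no attempt is made to reproduce it here. The sketch below only makes the ranking-based choice model and the cost structure concrete by brute-force evaluation on a tiny invented instance, which is exactly the enumeration the paper's algorithm is designed to avoid.

```python
from itertools import combinations

# Brute-force evaluation of assortments under a ranking-based choice model.
# All margins, penalties, and consumer types are invented for illustration.

products = {"A": 6.0, "B": 4.0, "C": 3.0}      # per-unit profit margin
carrying_cost = 2.0                             # fixed cost per product offered
substitution_penalty = 1.0                      # customer not getting first choice
lost_sale_penalty = 3.0                         # customer leaving empty-handed

# Each consumer type is (probability, preference ranking over products).
types = [(0.4, ["A", "B"]), (0.35, ["B", "C", "A"]), (0.25, ["C"])]

def profit(assortment):
    total = -carrying_cost * len(assortment)
    for prob, ranking in types:
        offered = [p for p in ranking if p in assortment]
        if not offered:
            total -= prob * lost_sale_penalty
            continue
        choice = offered[0]
        total += prob * products[choice]
        if choice != ranking[0]:
            total -= prob * substitution_penalty
    return total

best = max((s for r in range(len(products) + 1)
            for s in combinations(products, r)), key=profit)
print(best, profit(best))
```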
{
"docid": "24957794ed251c2e970d787df6d87064",
"text": "Glyph as a powerful multivariate visualization technique is used to visualize data through its visual channels. To visualize 3D volumetric dataset, glyphs are usually placed on 2D surface, such as the slicing plane or the feature surface, to avoid occluding each other. However, the 3D spatial structure of some features may be missing. On the other hand, placing large number of glyphs over the entire 3D space results in occlusion and visual clutter that make the visualization ineffective. To avoid the occlusion, we propose a view-dependent interactive 3D lens that removes the occluding glyphs by pulling the glyphs aside through the animation. We provide two space deformation models and two lens shape models to displace the glyphs based on their spatial distributions. After the displacement, the glyphs around the user-interested region are still visible as the context information, and their spatial structures are preserved. Besides, we attenuate the brightness of the glyphs inside the lens based on their depths to provide more depth cue. Furthermore, we developed an interactive glyph visualization system to explore different glyph-based visualization applications. In the system, we provide a few lens utilities that allows users to pick a glyph or a feature and look at it from different view directions. We compare different display/interaction techniques to visualize/manipulate our lens and glyphs.",
"title": ""
},
{
"docid": "d7bc62e7fca922f9b97e42deff85d010",
"text": "In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing highquality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedbackbased concept selection in the ILP setup in order to maximize the user-desired content in the summary.",
"title": ""
},
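For readers unfamiliar with the ILP framework mentioned above, here is a generic concept-coverage ILP for extractive summarization: pick sentences so that the total weight of covered concepts is maximized under a length budget. The paper's contribution is to adjust concept weights from user feedback across iterations; the fixed weights, sentences, and budget below are invented, and PuLP is used only as a convenient solver.

```python
import pulp

# Generic concept-based ILP for extractive summarization (illustrative data).
concept_weights = {"c1": 3.0, "c2": 2.0, "c3": 1.5, "c4": 1.0}
sentences = {                     # sentence id -> (length, concepts it contains)
    "s1": (10, {"c1", "c2"}),
    "s2": (8,  {"c2", "c3"}),
    "s3": (12, {"c1", "c3", "c4"}),
    "s4": (6,  {"c4"}),
}
budget = 20

prob = pulp.LpProblem("summary", pulp.LpMaximize)
x = {s: pulp.LpVariable(f"x_{s}", cat="Binary") for s in sentences}
y = {c: pulp.LpVariable(f"y_{c}", cat="Binary") for c in concept_weights}

# Objective: total weight of covered concepts.
prob += pulp.lpSum(concept_weights[c] * y[c] for c in concept_weights)
# Length budget on the selected sentences.
prob += pulp.lpSum(sentences[s][0] * x[s] for s in sentences) <= budget
# A concept counts as covered only if some selected sentence contains it.
for c in concept_weights:
    prob += y[c] <= pulp.lpSum(x[s] for s in sentences if c in sentences[s][1])

prob.solve()
print([s for s in sentences if x[s].value() == 1])
```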
{
"docid": "e948583ef067952fa8c968de5e5ae643",
"text": "A key problem in learning representations of multiple objects from unlabeled images is that it is a priori impossible to tell which part of the image corresponds to each individual object, and which part is irrelevant clutter. Distinguishing individual objects in a scene would allow unsupervised learning of multiple objects from unlabeled images. There is psychophysical and neurophysiological evidence that the brain employs visual attention to select relevant parts of the image and to serialize the perception of individual objects. We propose a method for the selection of salient regions likely to contain objects, based on bottom-up visual attention. By comparing the performance of David Lowe s recognition algorithm with and without attention, we demonstrate in our experiments that the proposed approach can enable one-shot learning of multiple objects from complex scenes, and that it can strongly improve learning and recognition performance in the presence of large amounts of clutter. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "c29a2429d6dd7bef7761daf96a29daaf",
"text": "In this meta-analysis, we synthesized data from published journal articles that investigated viewers’ enjoyment of fright and violence. Given the limited research on this topic, this analysis was primarily a way of summarizing the current state of knowledge and developing directions for future research. The studies selected (a) examined frightening or violent media content; (b) used self-report measures of enjoyment or preference for such content (the dependent variable); and (c) included independent variables that were given theoretical consideration in the literature. The independent variables examined were negative affect and arousal during viewing, empathy, sensation seeking, aggressiveness, and the respondents’ gender and age. The analysis confirmed that male viewers, individuals lower in empathy, and those higher in sensation seeking and aggressiveness reported more enjoyment of fright and violence. Some support emerged for Zillmann’s (1980, 1996) model of suspense enjoyment. Overall, the results demonstrate the importance of considering how viewers interpret or appraise their reactions to fright and violence. However, the studies were so diverse in design and measurement methods that it was difficult to identify the underlying processes. Suggestions are proposed for future research that will move toward the integration of separate lines of inquiry in a unified approach to understanding entertainment. MEDIA PSYCHOLOGY, 7, 207–237 Copyright © 2005, Lawrence Erlbaum Associates, Inc.",
"title": ""
},
{
"docid": "5ba3baabc84d02f0039748a4626ace36",
"text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.",
"title": ""
},
{
"docid": "ea2af110b27015b83659182948a32b36",
"text": "BACKGROUND\nDescent of the lateral aspect of the brow is one of the earliest signs of aging. The purpose of this study was to describe an open surgical technique for lateral brow lifts, with the goal of achieving reliable, predictable, and long-lasting results.\n\n\nMETHODS\nAn incision was made behind and parallel to the temporal hairline, and then extended deeper through the temporoparietal fascia to the level of the deep temporal fascia. Dissection was continued anteriorly on the surface of the deep temporal fascia and subperiosteally beyond the temporal crest, to the level of the superolateral orbital rim. Fixation of the lateral brow and tightening of the orbicularis oculi muscle was achieved with the placement of sutures that secured the tissue directly to the galea aponeurotica on the lateral aspect of the incision. An additional fixation was made between the temporoparietal fascia and the deep temporal fascia, as well as between the temporoparietal fascia and the galea aponeurotica. The excess skin in the temporal area was excised and the incision was closed.\n\n\nRESULTS\nA total of 519 patients were included in the study. Satisfactory lateral brow elevation was obtained in most of the patients (94.41%). The following complications were observed: total relapse (n=8), partial relapse (n=21), neurapraxia of the frontal branch of the facial nerve (n=5), and limited alopecia in the temporal incision (n=9).\n\n\nCONCLUSIONS\nWe consider this approach to be a safe and effective procedure, with long-lasting results.",
"title": ""
},
{
"docid": "5688bb564d7bd172be1aacc994305137",
"text": "Spain is one of the largest and most successful powers in international youth football, but this success has not extended to the national team. This lack of continued success seems to indicate a loss of potential. The relative age effect has been detected in football in many countries. Understanding the extent of this bias in the youth teams of Spanish elite clubs may help to improve selection processes and reduce the waste of potential. Comparisons between players from: the Spanish Professional Football League, all age categories of these clubs' youth teams, the Under-17 to Under-21 national teams, the national team, and the Spanish population, show a constant tendency to under-represent players from the later months of the selection year at all age groups of youth and Under-17 to Under-21 national teams. Professional and national team players show a similar but diminished behaviour that weakens with ageing, which suggests that talent identification and selection processes can be improved to help better identify potential talent early on and minimize wasted potential.",
"title": ""
},
{
"docid": "ff705a36e71e2aa898e99fbcfc9ec9d2",
"text": "This paper presents a design concept for smart home automation system based on the idea of the internet of things (IoT) technology. The proposed system has two scenarios where first one is denoted as a wireless based and the second is a wire-line based scenario. Each scenario has two operational modes for manual and automatic use. In Case of the wireless scenario, Arduino-Uno single board microcontroller as a central controller for home appliances is applied. Cellular phone with Matlab-GUI platform for monitoring and controlling processes through Wi-Fi communication technology is addressed. For the wire-line scenario, field-programmable gate array (FPGA) kit as a main controller is used. Simulation and hardware realization for the proposed system show its reliability and effectiveness.",
"title": ""
},
{
"docid": "8a36b081bb9dc9b9ed4eb9f6796c7fdb",
"text": "Almost all problems in computer vision are related in one form or an other to the problem of estimating parameters from noisy data. In this tutorial, we present what is probably the most commonly used techniques for parameter es timation. These include linear least-squares (pseudo-inverse and eigen analysis); orthogonal least-squares; gradient-weighted least-squares; bias-corrected renormal ization; Kalman ltering; and robust techniques (clustering, regression diagnostics, M-estimators, least median of squares). Particular attention has been devoted to discussions about the choice of appropriate minimization criteria and the robustness of the di erent techniques. Their application to conic tting is described. Key-words: Parameter estimation, Least-squares, Bias correction, Kalman lter ing, Robust regression Updated on April 15, 1996 To appear in Image and Vision Computing Journal, 1996",
"title": ""
},
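The tutorial above covers a wide range of estimators; as a concrete anchor, the snippet below shows only the simplest one on its list, ordinary linear least squares solved with the pseudo-inverse, applied to fitting a line to synthetic noisy points.

```python
import numpy as np

# Ordinary linear least squares via the pseudo-inverse: fit y = a*x + b.
# The tutorial goes well beyond this (orthogonal/weighted LS, Kalman filtering,
# M-estimators, least median of squares); data here are synthetic.

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.5 * x - 1.0 + rng.normal(scale=0.5, size=x.size)

A = np.column_stack([x, np.ones_like(x)])    # design matrix [x, 1]
params = np.linalg.pinv(A) @ y                # least-squares solution
a, b = params
print(f"estimated slope {a:.3f}, intercept {b:.3f}")

# Residual norm as a quick sanity check.
print("residual norm:", np.linalg.norm(A @ params - y))
```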
{
"docid": "2959be17f8186f6db5c479d39cc928db",
"text": "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8% and 40.5% respectively on PASCAL VOC 2009.",
"title": ""
},
{
"docid": "4ea7fba21969fcdd2de9b4e918583af8",
"text": "Due to the explosion in the size of the WWW[1,4,5] it becomes essential to make the crawling process parallel. In this paper we present an architecture for a parallel crawler that consists of multiple crawling processes called as C-procs which can run on network of workstations. The proposed crawler is scalable, is resilient against system crashes and other event. The aim of this architecture is to efficiently and effectively crawl the current set of publically indexable web pages so that we can maximize the download rate while minimizing the overhead from parallelization",
"title": ""
},
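The passage above describes C-procs running on a network of workstations; the sketch below shows only the coordination pattern in miniature, with threads standing in for separate processes, a shared frontier queue, and the fetch step stubbed out with a static link table so the example runs offline.

```python
import queue
import threading

# Coordination pattern of a parallel crawler: workers share a frontier queue
# and a visited set. Fetching is stubbed (LINKS) so the sketch is self-contained.

LINKS = {"a": ["b", "c"], "b": ["c", "d"], "c": ["e"], "d": [], "e": ["a"]}

frontier = queue.Queue()
visited, lock = set(), threading.Lock()

def fetch(url):
    return LINKS.get(url, [])          # stand-in for an HTTP fetch + link extraction

def worker():
    while True:
        try:
            url = frontier.get(timeout=0.5)
        except queue.Empty:
            return                      # frontier drained: worker exits
        for link in fetch(url):
            with lock:
                if link not in visited:
                    visited.add(link)
                    frontier.put(link)
        frontier.task_done()

visited.add("a")
frontier.put("a")
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("crawled:", sorted(visited))
```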
{
"docid": "8787335d8f5a459dc47b813fd385083b",
"text": "Human papillomavirus infection can cause a variety of benign or malignant oral lesions, and the various genotypes can cause distinct types of lesions. To our best knowledge, there has been no report of 2 different human papillomavirus-related oral lesions in different oral sites in the same patient before. This paper reported a patient with 2 different oral lesions which were clinically and histologically in accord with focal epithelial hyperplasia and oral papilloma, respectively. Using DNA extracted from these 2 different lesions, tissue blocks were tested for presence of human papillomavirus followed by specific polymerase chain reaction testing for 6, 11, 13, 16, 18, and 32 subtypes in order to confirm the clinical diagnosis. Finally, human papillomavirus-32-positive focal epithelial hyperplasia accompanying human papillomavirus-16-positive oral papilloma-like lesions were detected in different sites of the oral mucosa. Nucleotide sequence sequencing further confirmed the results. So in our clinical work, if the simultaneous occurrences of different human papillomavirus associated lesions are suspected, the multiple biopsies from different lesions and detection of human papillomavirus genotype are needed to confirm the diagnosis.",
"title": ""
},
{
"docid": "b50fb31e9c9bbf5f77b54bb048c0a025",
"text": "Companies use Facebook fan pages to promote their products or services. Recent research shows that user(UGC) and marketer-generated content (MGC) created on fan pages affect online sales. But it is still unclear how exactly they affect consumers during their purchase process. We analyze field data from a large German e-tailer to investigate the effects of UGC and MGC in a multi-stage model of purchase decision processes: awareness creation, interest stimulation, and final purchase decision. We find that MGC and UGC create awareness by attracting users to the fan page. Increased numbers of active users stimulate user interest, and more users visit the e-tailer’s online shop. Neutral UGC increase the conversion rate of online shop visitors. Comparisons between one-, twoand three-stage modes show that neglecting one or two stages hides several important effects of MGC and UGC on consumers and ultimately leads to inaccurate predictions of key business figures.",
"title": ""
},
{
"docid": "e3459bb93bb6f7af75a182472bb42b3e",
"text": "We consider the algorithmic problem of selecting a set of target nodes that cause the biggest activation cascade in a network. In case when the activation process obeys the diminishing return property, a simple hill-climbing selection mechanism has been shown to achieve a provably good performance. Here we study models of influence propagation that exhibit critical behavior and where the property of diminishing returns does not hold. We demonstrate that in such systems the structural properties of networks can play a significant role. We focus on networks with two loosely coupled communities and show that the double-critical behavior of activation spreading in such systems has significant implications for the targeting strategies. In particular, we show that simple strategies that work well for homogenous networks can be overly suboptimal and suggest simple modification for improving the performance by taking into account the community structure.",
"title": ""
},
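The baseline the passage above refers to is the standard greedy hill-climbing heuristic with spread estimated by Monte Carlo simulation of the independent cascade model; the paper's point is that on loosely coupled communities near criticality this baseline can be far from optimal. The sketch below shows only that baseline on an invented toy graph.

```python
import random

# Greedy hill-climbing seed selection with Monte-Carlo estimation of expected
# spread under the independent cascade model (the baseline, not the paper's fix).

def simulate_spread(graph, seeds, p, trials=200):
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nb in graph[node]:
                if nb not in active and random.random() < p:
                    active.add(nb)
                    frontier.append(nb)
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p):
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds + [n], p))
        seeds.append(best)
    return seeds

# Two small communities joined by a single bridge edge (toy example).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(greedy_seeds(graph, k=2, p=0.3))
```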
{
"docid": "ee11c968b4280f6da0b1c0f4544bc578",
"text": "A report is presented of some results of an ongoing project using neural-network modeling and learning techniques to search for and decode nonlinear regularities in asset price movements. The author focuses on the case of IBM common stock daily returns. Having to deal with the salient features of economic data highlights the role to be played by statistical inference and requires modifications to standard learning techniques which may prove useful in other contexts.<<ETX>>",
"title": ""
},
{
"docid": "cb21480098892e58157fe3789a363686",
"text": "The paper addresses the problem of real-word spell checking, i.e., the detection and correction of typos that result in real words of the target language. The paper proposes a methodology based on a mixed trigrams language model. Davide Fossati, Barbara Di Eugenio Spell Checking Introduction A Mixed Trigrams Approach Experimental settings Experimental results Conclusion Introduction Spell checking is the process of finding misspelled words in a written text, and possibly correcting them. We can classify spelling errors in two main groups: Non-word errors, which are spelling errors that result in words that do not exist in the language. E.g. “The bok was on the table.” Real-word errors are errors that by chance end up as actual words. E.g. “I saw tree tress in the park.” Detecting and correcting real-word errors is the main focus of the paper. Davide Fossati, Barbara Di Eugenio Spell Checking Introduction A Mixed Trigrams Approach Experimental settings Experimental results Conclusion Approaches Different approaches to real-word spell checking are present in the literature, e.g. Symbolic approaches check for grammatical anomalies. Statistical methods using n-gram models, PoS tagging, e.t.c. Davide Fossati, Barbara Di Eugenio Spell Checking Introduction A Mixed Trigrams Approach Experimental settings Experimental results Conclusion Problems with statistical methods The problem with statistical methods using only n-grams is the data sparseness problem. PoS methods suffer less from sparseness problems, but are unable to detect misspelled that are of the same part of speech. Davide Fossati, Barbara Di Eugenio Spell Checking Introduction A Mixed Trigrams Approach Experimental settings Experimental results Conclusion Mixed Trigrams Confusion set Levensthein Distance Method",
"title": ""
},
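To make the confusion-set idea in the passage above concrete, here is a highly simplified Python sketch: candidate real-word corrections come from a confusion set and are ranked by word-trigram counts. The toy counts, the add-one smoothing, and the use of plain word trigrams are assumptions for illustration only; the paper's actual model mixes trigrams over words and part-of-speech tags.

from collections import defaultdict

# Toy trigram counts; in practice these would be estimated from a large corpus.
TRIGRAM_COUNTS = defaultdict(int, {
    ("saw", "three", "trees"): 12,
    ("saw", "tree", "trees"): 0,
})
CONFUSION_SETS = {"tree": {"tree", "three"}, "three": {"tree", "three"}}

def score(prev, word, nxt):
    # Add-one smoothing so unseen trigrams still get a small nonzero score.
    return TRIGRAM_COUNTS[(prev, word, nxt)] + 1

def correct_real_word_errors(tokens):
    out = list(tokens)
    for i in range(1, len(tokens) - 1):
        candidates = CONFUSION_SETS.get(tokens[i], {tokens[i]})
        out[i] = max(candidates,
                     key=lambda c: score(out[i - 1], c, tokens[i + 1]))
    return out

print(correct_real_word_errors(["I", "saw", "tree", "trees", "in", "the", "park"]))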
{
"docid": "66af4d496e98e4b407922fbe9970a582",
"text": "Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. The global evaluation shows that for the two more informal genres, our summarization system using dialogue-specific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TFIDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.",
"title": ""
},
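The maximal-marginal-relevance baseline mentioned in the passage above can be sketched as follows. The vector representation (e.g., L2-normalised TFIDF vectors), the trade-off parameter lam, and the function names are assumptions for illustration, not the system's actual implementation.

import numpy as np

def mmr_select(sent_vecs, query_vec, k, lam=0.7):
    """Greedy maximal marginal relevance: trade off relevance to the query
    against redundancy with already-selected sentences. Vectors are assumed
    L2-normalised, so a dot product equals cosine similarity."""
    selected = []
    candidates = list(range(len(sent_vecs)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            rel = float(np.dot(sent_vecs[i], query_vec))
            red = max((float(np.dot(sent_vecs[i], sent_vecs[j]))
                       for j in selected), default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected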
{
"docid": "7ca6eafe5d4c3f7854f7104e780b5962",
"text": "For half a century since computers came into existence, the goal of finding elegant and efficient algorithms to solve “simple” (welldefined and well-structured) problems has dominated algorithm design. Over the same time period, both processing and storage capacity of computers have increased by roughly a factor of a million. The next few decades may well give us a similar rate of growth in raw computing power, due to various factors such as continuing miniaturization, parallel and distributed computing. If a quantitative change of orders of magnitude leads to qualitative advances, where will the latter take place? Only empirical research can answer this question. Asymptotic complexity theory has emerged as a surprisingly effective tool for predicting run times of polynomial-time algorithms. For NPhard problems, on the other hand, it yields overly pessimistic bounds. It asserts the non-existence of algorithms that are efficient across an entire problem class, but ignores the fact that many instances, perhaps including those of interest, can be solved efficiently. For such cases we need a complexity measure that applies to problem instances, rather than to over-sized problem classes. Combinatorial optimization and enumeration problems are modeled by state spaces that usually lack any regular structure. Exhaustive search is often the only way to handle such “combinatorial chaos”. Several general purpose search algorithms are used under different circumstances. We describe reverse search and illustrate this technique on a case study of enumerative optimization: enumerating the k shortest Euclidean spanning trees. 1 Catching Up with Technology Computer science is technology-driven: it has been so for the past 50 years, and will remain this way for the foreseeable future, for at least a decade. That is about as far as specialists can extrapolate current semiconductor technology and foresee that advances based on refined processes, without any need for fundamental innovations, will keep improving the performance of computing devices. Moreover, performance can be expected to advance at the rate of “Moore’s law”, V. Hlaváč, K. G. Jeffery, and J. Wiedermann (Eds.): SOFSEM 2000, LNCS 1963, pp. 18–35, 2000. c © Springer-Verlag Berlin Heidelberg 2000 Potential of Raw Computer Power 19 the same rate observed over the past 3 decades, of doubling in any period of 1 to 2 years. An up-to-date summary of possibilities and limitations of technology can be found in [12]. What does it mean for a discipline to be technology-driven? What are the implications? Consider the converse: disciplines that are demand-driven rather than technology-driven. In the 60s the US stated a public goal “to put a man on the moon by the end of the decade”. This well-defined goal called for a technology that did not as yet exist. With a great national effort and some luck, the technology was developed just in time to meet the announced goal—a memorable technical achievement. More often than not, when an ambitious goal calls for the invention of new technology, the goal remains wishful thinking. In such situations, it is prudent to announce fuzzy goals, where one can claim progress without being able to measure it. The computing field has on occasion been tempted to use this tactic, for example in predicting “machine intelligence”. Such an elastic concept can be re-defined periodically to mirror the current state-of-the-art. 
Apart from some exceptions, the dominant influence on computing has been a technology push, rather than a demand pull. In other words, computer architects, systems and application designers have always known that clock rates, flop/s, data rates and memory sizes will go up at a predictable, breath-taking speed. The question was less “what do we need to meet the demands of a new application?” than “what shall we do with the newly emerging resources?”. Faced with an embarrassment of riches, it is understandable that the lion’s share of development effort, both in industry and in academia, has gone into developing bigger, more powerful, hopefully a little better versions of the same applications that have been around for decades. What we experience as revolutionary in the break-neck speed with which computing is affecting society is not technical novelty, but rather, an unprecedented penetration of computing technology in all aspects of the technical infrastructure on which our civilization has come to rely. In recent years, computing technology’s outstanding achievement has been the breadth of its impact rather than the originality and depth of its scientific/technical innovations. The explosive spread of the Internet in recent years, based on technology developed a quarter-century ago, is a prominent example. This observation, that computing technology in recent years has been “spreading inside known territory” rather than “growing into new areas”, does not imply that computing has run out of important open problems or new ideas. On the contrary, tantalizing open questions and new ideas call for investigation, as the following two examples illustrate: 1. A challenging, fundamental open problem in our “information age” is a scientific definition of information. Shannon’s pioneering information theory is of unquestioned importance, but it does not capture the notion of “information” relevant in daily life (“what is your telephone number?”) or in business transactions (“what is today’s exchange rate”). The fact that we process information at all times without having a scientific definition of what we are processing is akin to the state of physics before Newton: humanity",
"title": ""
}
] |
scidocsrr
|
6a59e3be455be852d3d04e46ad6f3d1d
|
IEEE 802.15.4 security sublayer for OMNET++
|
[
{
"docid": "beec3b6b4e5ecaa05d6436426a6d93b7",
"text": "This paper introduces a 6LoWPAN simulation model for OMNeT++. Providing a 6LoWPAN model is an important step to advance OMNeT++-based Internet of Things simulations. We integrated Contiki’s 6LoWPAN implementation into OMNeT++ in order to avoid problems of non-standard compliant, non-interoperable, or highly abstracted and thus unreliable simulation models. The paper covers the model’s structure as well as its integration and the generic interaction between OMNeT++ / INET and Contiki.",
"title": ""
}
] |
[
{
"docid": "3c08e42ad9e6a2f2e7a29a187d8a791e",
"text": "An integrated single-inductor dual-output boost converter is presented. This converter adopts time-multiplexing control in providing two independent supply voltages (3.0 and 3.6 V) using only one 1H off-chip inductor and a single control loop. This converter is analyzed and compared with existing counterparts in the aspects of integration, architecture, control scheme, and system stability. Implementation of the power stage, the controller, and the peripheral functional blocks is discussed. The design was fabricated with a standard 0.5m CMOS n-well process. At an oscillator frequency of 1 MHz, the power conversion efficiency reaches 88.4% at a total output power of 350 mW. This topology can be extended to have multiple outputs and can be applied to buck, flyback, and other kinds of converters.",
"title": ""
},
{
"docid": "a9612aacde205be2d753c5119b9d95d3",
"text": "We propose a multi-object multi-camera framework for tracking large numbers of tightly-spaced objects that rapidly move in three dimensions. We formulate the problem of finding correspondences across multiple views as a multidimensional assignment problem and use a greedy randomized adaptive search procedure to solve this NP-hard problem efficiently. To account for occlusions, we relax the one-to-one constraint that one measurement corresponds to one object and iteratively solve the relaxed assignment problem. After correspondences are established, object trajectories are estimated by stereoscopic reconstruction using an epipolar-neighborhood search. We embedded our method into a tracker-to-tracker multi-view fusion system that not only obtains the three-dimensional trajectories of closely-moving objects but also accurately settles track uncertainties that could not be resolved from single views due to occlusion. We conducted experiments to validate our greedy assignment procedure and our technique to recover from occlusions. We successfully track hundreds of flying bats and provide an analysis of their group behavior based on 150 reconstructed 3D trajectories.",
"title": ""
},
{
"docid": "ccce778a661b2f4a1689da1ac190b2a6",
"text": "Neural Networks sequentially build high-level features through their successive layers. We propose here a new neural network model where each layer is associated with a set of candidate mappings. When an input is processed, at each layer, one mapping among these candidates is selected according to a sequential decision process. The resulting model is structured according to a DAG like architecture, so that a path from the root to a leaf node defines a sequence of transformations. Instead of considering global transformations, like in classical multilayer networks, this model allows us for learning a set of local transformations. It is thus able to process data with different characteristics through specific sequences of such local transformations, increasing the expression power of this model w.r.t a classical multilayered network. The learning algorithm is inspired from policy gradient techniques coming from the reinforcement learning domain and is used here instead of the classical back-propagation based gradient descent techniques. Experiments on different datasets show the relevance of this approach.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "4d75db0597f4ca4d4a3abba398e99cb4",
"text": "Coverage path planning determines a path that guides an autonomous vehicle to pass every part of a workspace completely and efficiently. Since turns are often costly for autonomous vehicles, minimizing the number of turns usually produces more working efficiency. This paper presents an optimization approach to minimize the number of turns of autonomous vehicles in coverage path planning. For complex polygonal fields, the problem is reduced to finding the optimal decomposition of the original field into simple subfields. The optimization criterion is minimization of the sum of widths of these decomposed subfields. Here, a new algorithm is designed based on a multiple sweep line decomposition. The time complexity of the proposed algorithm is O(n2 log n). Experiments show that the proposed algorithm can provide nearly optimal solutions very efficiently when compared against recent state-of-the-art. The proposed algorithm can be applied for both convex and non-convex fields.",
"title": ""
},
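A small sketch of the geometric quantity minimised in the passage above: for a convex (sub)field covered by straight parallel passes, the number of turns grows with the field's width perpendicular to the sweep direction, and for a convex polygon the minimising direction is parallel to one of its edges. The function below computes that minimal width; it is an illustrative fragment under those assumptions, not the paper's decomposition algorithm.

import math

def min_sweep_width(poly):
    """For a convex polygon (list of (x, y) vertices in CCW order), return the
    smallest width over all sweep directions. The minimum is attained when the
    sweep lines are parallel to one of the edges, so checking each edge suffices."""
    n = len(poly)
    best = float("inf")
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = math.hypot(ex, ey)
        # Unit normal of the edge; width = largest projection of any vertex onto it.
        nx, ny = -ey / length, ex / length
        width = max(abs((x - x1) * nx + (y - y1) * ny) for x, y in poly)
        best = min(best, width)
    return best

# A field needs roughly ceil(width / implement_width) parallel passes,
# so smaller total width across subfields means fewer turns.
print(min_sweep_width([(0, 0), (10, 0), (10, 4), (0, 4)]))  # -> 4.0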
{
"docid": "83f3c9f161b9871b59376ba4d415ebcc",
"text": "Much work has been done in understanding human creativity and defining measures to evaluate creativity. This is necessary mainly for the reason of having an objective and automatic way of quantifying creative artifacts. In this work, we propose a regression-based learning framework which takes into account quantitatively the essential criteria for creativity like novelty, influence, value and unexpectedness. As it is often the case with most creative domains, there is no clear ground truth available for creativity. Our proposed learning framework is applicable to all creative domains; yet we evaluate it on a dataset of movies created from IMDb and Rotten Tomatoes due to availability of audience and critic scores, which can be used as proxy ground truth labels for creativity. We report promising results and observations from our experiments in the following ways : 1) Correlation of creative criteria with critic scores, 2) Improvement in movie rating prediction with inclusion of various creative criteria, and 3) Identification of creative movies.",
"title": ""
},
{
"docid": "89432b112f153319d3a2a816c59782e3",
"text": "The Eyelink Toolbox software supports the measurement of eye movements. The toolbox provides an interface between a high-level interpreted language (MATLAB), a visual display programming toolbox (Psychophysics Toolbox), and a video-based eyetracker (Eyelink). The Eyelink Toolbox enables experimenters to measure eye movements while simultaneously executing the stimulus presentation routines provided by the Psychophysics Toolbox. Example programs are included with the toolbox distribution. Information on the Eyelink Toolbox can be found at http://psychtoolbox.org/.",
"title": ""
},
{
"docid": "fe3afe69ec27189400e65e8bdfc5bf0b",
"text": "speech learning changes over the life span and to explain why \"earlier is better\" as far as learning to pronounce a second language (L2) is concerned. An assumption we make is that the phonetic systems used in the production and perception of vowels and consonants remain adaptive over the life span, and that phonetic systems reorganize in response to sounds encountered in an L2 through the addition of new phonetic categories, or through the modification of old ones. The chapter is organized in the following way. Several general hypotheses concerning the cause of foreign accent in L2 speech production are summarized in the introductory section. In the next section, a model of L2 speech learning that aims to account for age-related changes in L2 pronunciation is presented. The next three sections present summaries of empirical research dealing with the production and perception of L2 vowels, word-initial consonants, and word-final consonants. The final section discusses questions of general theoretical interest, with special attention to a featural (as opposed to a segmental) level of analysis. Although nonsegmental (Le., prosodic) dimensions are an important source of foreign accent, the present chapter focuses on phoneme-sized units of speech. Although many different languages are learned as an L2, the focus is on the acquisition of English.",
"title": ""
},
{
"docid": "9fe198a6184a549ff63364e9782593d8",
"text": "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors/stocks and users/businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.",
"title": ""
},
{
"docid": "aadc952471ecd67d0c0731fa5a375872",
"text": "As the aircraft industry is moving towards the all electric and More Electric Aircraft (MEA), there is increase demand for electrical power in the aircraft. The trend in the aircraft industry is to replace hydraulic and pneumatic systems with electrical systems achieving more comfort and monitoring features. Moreover, the structure of MEA distribution system improves aircraft maintainability, reliability, flight safety and efficiency. Detailed descriptions of the modern MEA generation and distribution systems as well as the power converters and load types are explained and outlined. MEA electrical distribution systems are mainly in the form of multi-converter power electronic system.",
"title": ""
},
{
"docid": "189709296668a8dd6f7be8e1b2f2e40f",
"text": "Uncertain data management, querying and mining have become important because the majority of real world data is accompanied with uncertainty these days. Uncertainty in data is often caused by the deficiency in underlying data collecting equipments or sometimes manually introduced to preserve data privacy. This work discusses the problem of distance-based outlier detection on uncertain datasets of Gaussian distribution. The Naive approach of distance-based outlier on uncertain data is usually infeasible due to expensive distance function. Therefore a cell-based approach is proposed in this work to quickly identify the outliers. The infinite nature of Gaussian distribution prevents to devise effective pruning techniques. Therefore an approximate approach using bounded Gaussian distribution is also proposed. Approximating Gaussian distribution by bounded Gaussian distribution enables an approximate but more efficient cell-based outlier detection approach. An extensive empirical study on synthetic and real datasets show that our proposed approaches are effective, efficient and scalable.",
"title": ""
},
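For orientation, the naive distance-based outlier definition that the cell-based approach above is designed to accelerate can be written directly; the sketch below handles certain (non-uncertain) points only, since the Gaussian-uncertainty handling and the cell-based pruning are precisely what the paper contributes. Parameter names are illustrative.

import numpy as np

def distance_based_outliers(points, eps, pi):
    """Naive O(n^2) DB(eps, pi) outlier detection: a point is flagged when
    fewer than a fraction `pi` of the other points lie within distance `eps`.
    This is the expensive baseline that cell-based pruning speeds up."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    flags = []
    for i in range(n):
        d = np.linalg.norm(pts - pts[i], axis=1)
        neighbours = np.count_nonzero(d <= eps) - 1  # exclude the point itself
        flags.append(neighbours < pi * (n - 1))
    return flags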
{
"docid": "c7e3fc9562a02818bba80d250241511d",
"text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.",
"title": ""
},
{
"docid": "4f7b6ad29f8a6cbe871ed5a6a9e75896",
"text": "Copyright: © 2017. The Author(s). Licensee: AOSIS. This work is licensed under the Creative Commons Attribution License. Introduction Glaucoma is an optic neuropathy that sometimes results in irreversible blindness.1 After cataracts, glaucoma is the second most prevalent cause of global blindness,2 and it is estimated that almost 80 million people worldwide will be affected by this optic neuropathy by the year 2020.3 Because of the high prevalence of this ocular disease, the economic and social implications of glaucoma have been outlined in recent studies.4,5 In Africa, primary open-angle glaucoma (POAG) is more prevalent than primary-angle closure glaucoma, and over the next 4 years, the prevalence of POAG in Africa is projected to increase by 23% corresponding to an increase from 6.2 million to 8.0 million affected individuals.3 Consequently, in Africa, there have been recommendations to incorporate glaucoma screening procedures into routine eye examinations as well as implement glaucoma blindness control programs.6,7",
"title": ""
},
{
"docid": "cd59460d293aa7ecbb9d7b96ed451b9a",
"text": "PURPOSE\nThe prevalence of work-related upper extremity musculoskeletal disorders and visual symptoms reported in the USA has increased dramatically during the past two decades. This study examined the factors of computer use, workspace design, psychosocial factors, and organizational ergonomics resources on musculoskeletal and visual discomfort and their impact on the safety and health of computer work employees.\n\n\nMETHODS\nA large-scale, cross-sectional survey was administered to a US manufacturing company to investigate these relationships (n = 1259). Associations between these study variables were tested along with moderating effects framed within a conceptual model.\n\n\nRESULTS\nSignificant relationships were found between computer use and psychosocial factors of co-worker support and supervisory relations with visual and musculoskeletal discomfort. Co-worker support was found to be significantly related to reports of eyestrain, headaches, and musculoskeletal discomfort. Supervisor relations partially moderated the relationship between workspace design satisfaction and visual and musculoskeletal discomfort.\n\n\nCONCLUSION\nThis study provides guidance for developing systematic, preventive measures and recommendations in designing office ergonomics interventions with the goal of reducing musculoskeletal and visual discomfort while enhancing office and computer workers' performance and safety.",
"title": ""
},
{
"docid": "f2395e705e84548186a57b2a199c1ddd",
"text": "Full-duplex technology is likely to be adopted in various legacy communications standards. The IEEE 802.11ax Working Group has been considering a simultaneous transmit and receive (STR) mode for the next generation WLANs. Enabling STR mode (FD communication mode) in 802.11 networks creates bidirectional FD (BFD) and unidirectional FD (UFD) links. The key challenge is to integrate STR mode with minimal protocol modifications, while considering the coexistence of FD and legacy half-duplex STAs and backward compatibility. This article proposes a simple and practical approach to enable STR mode in 802.11 networks with coexisting FD and HD STAs. The protocol explicitly accounts for the peculiarities of FD environments and backward compatibility. Key aspects of the proposed solution include FD capability discovery, a handshake mechanism for channel access, node selection for UFD transmission, adaptive ACK timeout for STAs engaged in BFD or UFD transmission, and mitigation of contention unfairness. Performance evaluation demonstrates the effectiveness of the proposed solution in realizing the gains of FD technology for next generation WLANs.",
"title": ""
},
{
"docid": "6cc203d16e715cbd71efdeca380f3661",
"text": "PURPOSE\nTo determine a population-based estimate of communication disorders (CDs) in children; the co-occurrence of intellectual disability (ID), autism, and emotional/behavioral disorders; and the impact of these conditions on the prevalence of CDs.\n\n\nMETHOD\nSurveillance targeted 8-year-olds born in 1994 residing in 2002 in the 3 most populous counties in Utah (n = 26,315). A multiple-source record review was conducted at all major health and educational facilities.\n\n\nRESULTS\nA total of 1,667 children met the criteria of CD. The prevalence of CD was estimated to be 63.4 per 1,000 8-year-olds (95% confidence interval = 60.4-66.2). The ratio of boys to girls was 1.8:1. Four percent of the CD cases were identified with an ID and 3.7% with autism spectrum disorders (ASD). Adjusting the CD prevalence to exclude ASD and/or ID cases significantly affected the CD prevalence rate. Other frequently co-occurring emotional/behavioral disorders with CD were attention deficit/hyperactivity disorder, anxiety, and conduct disorder.\n\n\nCONCLUSIONS\nFindings affirm that CDs and co-occurring mental health conditions are a major educational and public health concern.",
"title": ""
},
{
"docid": "951213cd4412570709fb34f437a05c72",
"text": "In this paper, we present directional skip-gram (DSG), a simple but effective enhancement of the skip-gram model by explicitly distinguishing left and right context in word prediction. In doing so, a direction vector is introduced for each word, whose embedding is thus learned by not only word co-occurrence patterns in its context, but also the directions of its contextual words. Theoretical and empirical studies on complexity illustrate that our model can be trained as efficient as the original skip-gram model, when compared to other extensions of the skip-gram model. Experimental results show that our model outperforms others on different datasets in semantic (word similarity measurement) and syntactic (partof-speech tagging) evaluations, respectively.",
"title": ""
},
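One plausible reading of the direction mechanism described above, written as a toy scoring function: each word carries an extra direction embedding whose contribution flips sign depending on whether the context word lies to the left or to the right of the target. This is an illustrative sketch of that idea only, not the paper's exact objective or training procedure; all names and shapes are assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def directional_score(v_target, c_context, delta_context, side):
    """Illustrative direction-aware score: the usual skip-gram dot product
    plus a direction term whose sign depends on whether the context word
    appears to the left (-1) or to the right (+1) of the target word."""
    sign = 1.0 if side == "right" else -1.0
    g = sigmoid(sign * float(np.dot(delta_context, v_target)))
    return float(np.dot(c_context, v_target)) + g

rng = np.random.default_rng(0)
v, c, d = rng.normal(size=(3, 50))
print(directional_score(v, c, d, "left"), directional_score(v, c, d, "right"))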
{
"docid": "d88e4d9bba66581be16c9bd59d852a66",
"text": "After five decades characterized by empiricism and several pitfalls, some of the basic mechanisms of action of ozone in pulmonary toxicology and in medicine have been clarified. The present knowledge allows to understand the prolonged inhalation of ozone can be very deleterious first for the lungs and successively for the whole organism. On the other hand, a small ozone dose well calibrated against the potent antioxidant capacity of blood can trigger several useful biochemical mechanisms and reactivate the antioxidant system. In detail, firstly ex vivo and second during the infusion of ozonated blood into the donor, the ozone therapy approach involves blood cells and the endothelium, which by transferring the ozone messengers to billions of cells will generate a therapeutic effect. Thus, in spite of a common prejudice, single ozone doses can be therapeutically used in selected human diseases without any toxicity or side effects. Moreover, the versatility and amplitude of beneficial effect of ozone applications have become evident in orthopedics, cutaneous, and mucosal infections as well as in dentistry.",
"title": ""
},
{
"docid": "ee4416a05b955cdbd83b1819f0152665",
"text": "relative densities of pharmaceutical solids play an important role in determining their performance (e.g., flow and compaction properties) in both tablet and capsule dosage forms. In this article, the authors report the densities of a wide variety of solid pharmaceutical formulations and intermediates. The variance of density with chemical structure, processing history, and dosage-form type is significant. This study shows that density can be used as an equipment-independent scaling parameter for several common drug-product manufacturing operations. any physical responses of powders, granules, and compacts such as powder flow and tensile strength are determined largely by their absolute and relative densities (1–8). Although measuring these properties is a simple task, a review of the literature reveals that a combined source of density data that formulation scientists can refer to does not exist. The purpose of this article is to provide such a reference source and to give insight about how these critical properties can be measured for common pharmaceutical solids and how they can be used for monitoring common drugproduct manufacturing operations.",
"title": ""
},
{
"docid": "2a0b81bbe867a5936dafc323d8563970",
"text": "Social network analysis has gained significant attention in recent years, largely due to the success of online social networking and media-sharing sites, and the consequent availability of a wealth of social network data. In spite of the growing interest, however, there is little understanding of the potential business applications of mining social networks. While there is a large body of research on different problems and methods for social network mining, there is a gap between the techniques developed by the research community and their deployment in real-world applications. Therefore the potential business impact of these techniques is still largely unexplored.\n In this article we use a business process classification framework to put the research topics in a business context and provide an overview of what we consider key problems and techniques in social network analysis and mining from the perspective of business applications. In particular, we discuss data acquisition and preparation, trust, expertise, community structure, network dynamics, and information propagation. In each case we present a brief overview of the problem, describe state-of-the art approaches, discuss business application examples, and map each of the topics to a business process classification framework. In addition, we provide insights on prospective business applications, challenges, and future research directions. The main contribution of this article is to provide a state-of-the-art overview of current techniques while providing a critical perspective on business applications of social network analysis and mining.",
"title": ""
}
] |
scidocsrr
|
a8f3360c7be5cfacb5d0ef790526247a
|
Formalizing a Systematic Review Updating Process
|
[
{
"docid": "e79777797fa3cc1ef4650480a7344c40",
"text": "Synopsis A framework is presented which assists requirements engineers to choose methods for requirements acquisition. Practitioners are often unaware of the range of methods available. Even when practitioners are aware, most do not foresee the need to use several methods to acquire complete and accurate requirements. One reason for this is the lack of guidelines for method selection. The ACRE framework sets out to overcome these limitations. Method selection is achieved using questions driven from a set of facets which define the strengths and weaknesses of each method. The framework is presented as guidelines for requirements engineering practitioners. It has undergone some evaluation through its presentation to highly-experienced requirements engineers. Some results from this evaluation have been incorporated into the version of ACRE presented in the paper.",
"title": ""
}
] |
[
{
"docid": "ba7fe17912c942690c44bc81ce772c22",
"text": "[1] We present here a new InSAR persistent scatterer (PS) method for analyzing episodic crustal deformation in non-urban environments, with application to volcanic settings. Our method for identifying PS pixels in a series of interferograms is based primarily on phase characteristics and finds low-amplitude pixels with phase stability that are not identified by the existing amplitude-based algorithm. Our method also uses the spatial correlation of the phases rather than a well-defined phase history so that we can observe temporally-variable processes, e.g., volcanic deformation. The algorithm involves removing the residual topographic component of flattened interferogram phase for each PS, then unwrapping the PS phases both spatially and temporally. Our method finds scatterers with stable phase characteristics independent of amplitudes associated with man-made objects, and is applicable to areas where conventional InSAR fails due to complete decorrelation of the majority of scatterers, yet a few stable scatterers are present.",
"title": ""
},
{
"docid": "2536596ecba0498e7dbcb097695171b0",
"text": "How can we effectively encode evolving information over dynamic graphs into low-dimensional representations? In this paper, we propose DyRep – an inductive deep representation learning framework that learns a set of functions to efficiently produce low-dimensional node embeddings that evolves over time. The learned embeddings drive the dynamics of two key processes namely, communication and association between nodes in dynamic graphs. These processes exhibit complex nonlinear dynamics that evolve at different time scales and subsequently contribute to the update of node embeddings. We employ a time-scale dependent multivariate point process model to capture these dynamics. We devise an efficient unsupervised learning procedure and demonstrate that our approach significantly outperforms representative baselines on two real-world datasets for the problem of dynamic link prediction and event time prediction.",
"title": ""
},
{
"docid": "4b03aeb6c56cc25ce57282279756d1ff",
"text": "Weighted signed networks (WSNs) are networks in which edges are labeled with positive and negative weights. WSNs can capture like/dislike, trust/distrust, and other social relationships between people. In this paper, we consider the problem of predicting the weights of edges in such networks. We propose two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a node captures how fair the node is in rating other nodes' likeability or trust level. We provide axioms that these two notions need to satisfy and show that past work does not meet these requirements for WSNs. We provide a mutually recursive definition of these two concepts and prove that they converge to a unique solution in linear time. We use the two measures to predict the edge weight in WSNs. Furthermore, we show that when compared against several individual algorithms from both the signed and unsigned social network literature, our fairness and goodness metrics almost always have the best predictive power. We then use these as features in different multiple regression models and show that we can predict edge weights on 2 Bitcoin WSNs, an Epinions WSN, 2 WSNs derived from Wikipedia, and a WSN derived from Twitter with more accurate results than past work. Moreover, fairness and goodness metrics form the most significant feature for prediction in most (but not all) cases.",
"title": ""
},
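A compact sketch of the mutually recursive fairness/goodness computation described above, assuming edge weights already rescaled to [-1, 1]. The initial values, the error normalisation and the fixed iteration count are assumptions for illustration and may differ from the paper's exact formulation and convergence proof.

def fairness_goodness(edges, iterations=50):
    """Iteratively compute fairness (of raters) and goodness (of ratees) on a
    weighted signed network. `edges` is a list of (u, v, w) with w in [-1, 1]."""
    out_edges, in_edges = {}, {}
    for u, v, w in edges:
        out_edges.setdefault(u, []).append((v, w))
        in_edges.setdefault(v, []).append((u, w))
    fairness = {u: 1.0 for u in out_edges}
    goodness = {v: 1.0 for v in in_edges}
    for _ in range(iterations):
        # Goodness: fairness-weighted average of incoming ratings.
        for v, ins in in_edges.items():
            goodness[v] = sum(fairness.get(u, 1.0) * w for u, w in ins) / len(ins)
        # Fairness: 1 minus the average (normalised) rating error of the rater.
        for u, outs in out_edges.items():
            err = sum(abs(w - goodness.get(v, 0.0)) / 2.0 for v, w in outs)
            fairness[u] = 1.0 - err / len(outs)
    return fairness, goodness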
{
"docid": "cf7b17b690258dc50ec12bfbd9de232d",
"text": "In this paper, we propose a novel method for visual object tracking called HMMTxD. The method fuses observations from complementary out-of-the box trackers and a detector by utilizing a hidden Markov model whose latent states correspond to a binary vector expressing the failure of individual trackers. The Markov model is trained in an unsupervised way, relying on an online learned detector to provide a source of tracker-independent information for a modified BaumWelch algorithm that updates the model w.r.t. the partially annotated data. We show the effectiveness of the proposed method on combination of two and three tracking algorithms. The performance of HMMTxD is evaluated on two standard benchmarks (CVPR2013 and VOT) and on a rich collection of 77 publicly available sequences. The HMMTxD outperforms the state-of-the-art, often significantly, on all datasets in almost all criteria.",
"title": ""
},
{
"docid": "bdb41d1633c603f4b68dfe0191eb822b",
"text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.",
"title": ""
},
{
"docid": "07817eb2722fb434b1b8565d936197cf",
"text": "We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.",
"title": ""
},
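The kind of matrix/vector abstraction discussed above is typified by the standard "im2col" trick, in which a 2-D convolution is rewritten as a single matrix multiplication that vector hardware executes efficiently. The sketch below illustrates that trick for a single-channel image; it is a generic illustration, not claimed to be the paper's implementation.

import numpy as np

def conv2d_as_matmul(image, kernels):
    """Express a valid 2-D convolution (cross-correlation) of a single-channel
    image with several k x k kernels as one matrix multiplication (im2col)."""
    H, W = image.shape
    n_k, k, _ = kernels.shape
    out_h, out_w = H - k + 1, W - k + 1
    # Each column of `cols` is one k*k image patch, flattened.
    cols = np.empty((k * k, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = image[i:i + k, j:j + k].ravel()
            idx += 1
    out = kernels.reshape(n_k, k * k) @ cols  # one big GEMM
    return out.reshape(n_k, out_h, out_w)

img = np.arange(25, dtype=float).reshape(5, 5)
ker = np.ones((2, 3, 3))
print(conv2d_as_matmul(img, ker).shape)  # (2, 3, 3)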
{
"docid": "ba314edceb1b8ac00f94ad0037bd5b8e",
"text": "AMS subject classifications: primary 62G10 secondary 62H20 Keywords: dCor dCov Multivariate independence Distance covariance Distance correlation High dimension a b s t r a c t Distance correlation is extended to the problem of testing the independence of random vectors in high dimension. Distance correlation characterizes independence and determines a test of multivariate independence for random vectors in arbitrary dimension. In this work, a modified distance correlation statistic is proposed, such that under independence the distribution of a transformation of the statistic converges to Student t, as dimension tends to infinity. Thus we obtain a distance correlation t-test for independence of random vectors in arbitrarily high dimension, applicable under standard conditions on the coordinates that ensure the validity of certain limit theorems. This new test is based on an unbiased es-timator of distance covariance, and the resulting t-test is unbiased for every sample size greater than three and all significance levels. The transformed statistic is approximately normal under independence for sample size greater than nine, providing an informative sample coefficient that is easily interpretable for high dimensional data. 1. Introduction Many applications in genomics, medicine, engineering, etc. require analysis of high dimensional data. Time series data can also be viewed as high dimensional data. Objects can be represented by their characteristics or features as vectors p. In this work, we consider the extension of distance correlation to the problem of testing independence of random vectors in arbitrarily high, not necessarily equal dimensions, so the dimension p of the feature space of a random vector is typically large. measure all types of dependence between random vectors in arbitrary, not necessarily equal dimensions. (See Section 2 for definitions.) Distance correlation takes values in [0, 1] and is equal to zero if and only if independence holds. It is more general than the classical Pearson product moment correlation, providing a scalar measure of multivariate independence that characterizes independence of random vectors. The distance covariance test of independence is consistent against all dependent alternatives with finite second moments. In practice, however, researchers are often interested in interpreting the numerical value of distance correlation, without a formal test. For example, given an array of distance correlation statistics, what can one learn about the strength of dependence relations from the dCor statistics without a formal test? This is in fact, a difficult question, but a solution is finally available for a large class of problems. The …",
"title": ""
},
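For reference, the ordinary (biased) sample distance correlation that the passage above builds on can be computed as follows; the paper's modified, unbiased estimator and the t-test transformation are not reproduced here, and the helper names are illustrative.

import numpy as np

def distance_correlation(x, y):
    """Standard (biased) sample distance correlation between two samples of
    equal length; x and y may be vectors or (n, p) arrays."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    y = np.atleast_2d(np.asarray(y, dtype=float))
    if x.shape[0] == 1:
        x = x.T
    if y.shape[0] == 1:
        y = y.T

    def centred_dists(z):
        # Pairwise Euclidean distance matrix, then double centering.
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

    A, B = centred_dists(x), centred_dists(y)
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    denom = np.sqrt(dvar_x * dvar_y)
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0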
{
"docid": "4eeb20c4a5cc259be1355b04813223f6",
"text": "Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout’s training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove the upper bounds on the loss in accuracy due to expectation-linearization, describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently.",
"title": ""
},
{
"docid": "b1d348e2095bd7054cc11bd84eb8ccdc",
"text": "Epidermolysis bullosa (EB) is a group of inherited, mechanobullous disorders caused by mutations in various structural proteins in the skin. There have been several advances in the classification of EB since it was first introduced in the late 19th century. We now recognize four major types of EB, depending on the location of the target proteins and level of the blisters: EB simplex (epidermolytic), junctional EB (lucidolytic), dystrophic EB (dermolytic), and Kindler syndrome (mixed levels of blistering). This contribution will summarize the most recent classification and discuss the molecular basis, target genes, and proteins involved. We have also included new subtypes, such as autosomal dominant junctional EB and autosomal recessive EB due to mutations in the dystonin (DST) gene, which encodes the epithelial isoform of bullouspemphigoid antigen 1. The main laboratory diagnostic techniques-immunofluorescence mapping, transmission electron microscopy, and mutation analysis-will also be discussed. Finally, the clinical characteristics of the different major EB types and subtypes will be reviewed.",
"title": ""
},
{
"docid": "cf374e1d1fa165edaf0b29749f32789c",
"text": "Photovoltaic (PV) system performance extremely depends on local insolation and temperature conditions. Under partial shading, P-I characteristics of PV systems are complicated and may have multiple local maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily fail to track global maxima and may be trapped in local maxima under partial shading; this can be one of main causes for reduced energy yield for many PV systems. In order to solve this problem, this paper proposes a novel Maximum Power Point tracking algorithm based on Differential Evolution (DE) that is capable of tracking global MPP under partial shaded conditions. The ability of proposed algorithm and its excellent performances are evaluated with conventional and popular algorithm by means of simulation. The proposed algorithm works in conjunction with a Boost (step up) DC-DC converter to track the global peak. Moreover, this paper includes a MATLAB-based modeling and simulation scheme suitable for photovoltaic characteristics under partial shading.",
"title": ""
},
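A minimal sketch of the differential-evolution search underlying such an MPPT scheme, with the converter duty cycle as the single decision variable. The toy power curve, population size and DE parameters are illustrative assumptions; in a real system the fitness evaluation would be a measured array power at the commanded duty cycle, and the paper's exact DE variant may differ.

import random

def de_mppt(power_of_duty, pop_size=10, gens=50, F=0.5):
    """Differential-evolution search over the duty cycle in [0, 1] that
    maximises the supplied power function. With a single decision variable
    the crossover step degenerates, so each trial is the clipped mutant."""
    pop = [random.random() for _ in range(pop_size)]
    fit = [power_of_duty(d) for d in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            trial = min(max(pop[a] + F * (pop[b] - pop[c]), 0.0), 1.0)
            f = power_of_duty(trial)
            if f > fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f
    best = max(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy two-peak P(duty) curve mimicking partial shading.
toy_curve = lambda d: 60 * d * (1 - d) + 25 * max(0.0, 1 - 10 * abs(d - 0.8))
print(de_mppt(toy_curve))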
{
"docid": "7bbffa53f71207f0f218a09f18586541",
"text": "Myelotoxicity induced by chemotherapy may become life-threatening. Neutropenia may be prevented by granulocyte colony-stimulating factors (GCSF), and epoetin may prevent anemia, but both cause substantial side effects and increased costs. According to non-established data, wheat grass juice (WGJ) may prevent myelotoxicity when applied with chemotherapy. In this prospective matched control study, 60 patients with breast carcinoma on chemotherapy were enrolled and assigned to an intervention or control arm. Those in the intervention arm (A) were given 60 cc of WGJ orally daily during the first three cycles of chemotherapy, while those in the control arm (B) received only regular supportive therapy. Premature termination of treatment, dose reduction, and starting GCSF or epoetin were considered as \"censoring events.\" Response rate to chemotherapy was calculated in patients with evaluable disease. Analysis of the results showed that five censoring events occurred in Arm A and 15 in Arm B (P = 0.01). Of the 15 events in Arm B, 11 were related to hematological events. No reduction in response rate was observed in patients who could be assessed for response. Side effects related to WGJ were minimal, including worsening of nausea in six patients, causing cessation of WGJ intake. In conclusion, it was found that WGJ taken during FAC chemotherapy may reduce myelotoxicity, dose reductions, and need for GCSF support, without diminishing efficacy of chemotherapy. These preliminary results need confirmation in a phase III study.",
"title": ""
},
{
"docid": "b9e7fedbc42f815b35351ec9a0c31b33",
"text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. 
The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. Just as assuredly there is the danger that the wrong headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom, On her part, Guri-Rosenblit (2005, p. 14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing 200 British Journal of Educational Technology Vol 41 No 2 2010 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computermediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. 
Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. Unfortunately, many of these uses are based on popular assumptions and myths. While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass e-Learning myths 201 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs ",
"title": ""
},
{
"docid": "af8fbdfbc4c4958f69b3936ff2590767",
"text": "Analysis of sedimentary diatom assemblages (10 to 144 ka) forms the basis for a detailed reconstruction of the paleohydrography and diatom paleoecology of Lake Malawi. Lake-level fluctuations on the order of hundreds of meters were inferred from dramatic changes in the fossil and sedimentary archives. Many of the fossil diatom assemblages we observed have no analog in modern Lake Malawi. Cyclotelloid diatom species are a major component of fossil assemblages prior to 35 ka, but are not found in significant abundances in the modern diatom communities in Lake Malawi. Salinity- and alkalinity-tolerant plankton has not been reported in the modern lake system, but frequently dominated fossil diatom assemblages prior to 85 ka. Large stephanodiscoid species that often dominate the plankton today are rarely present in the fossil record prior to 31 ka. Similarly, prior to 31 ka, common central-basin aulacoseiroid species are replaced by species found in the shallow, well-mixed southern basin. Surprisingly, tychoplankton and periphyton were not common throughout prolonged lowstands, but tended to increase in relative abundance during periods of inferred deeper-lake environments. A high-resolution lake level reconstruction was generated by a principal component analysis of fossil diatom and wet-sieved fossil and mineralogical residue records. Prior to 70 ka, fossil assemblages suggest that the central basin was periodically a much shallower, more saline and/or alkaline, well-mixed environment. The most significant reconstructed lowstands are ~ 600 m below the modern lake level and span thousands of years. These conditions contrast starkly with the deep, dilute, dysaerobic environments of the modern central basin. After 70 ka, our reconstruction indicates sustained deeper-water environments were common, marked by a few brief, but significant, lowstands. High amplitude lake-level fluctuations appear related to changes in insolation. Seismic reflection data and additional sediment cores recovered from the northern basin of Lake Malawi provide evidence that supports our reconstruction.",
"title": ""
},
{
"docid": "3d490d7d30dcddc3f1c0833794a0f2df",
"text": "Purpose-This study attempts to investigate (1) the effect of meditation experience on employees’ self-directed learning (SDL) readiness and organizational innovative (OI) ability as well as organizational performance (OP), and (2) the relationships among SDL, OI, and OP. Design/methodology/approach-This study conducts an empirical study of 15 technological companies (n = 412) in Taiwan, utilizing the collected survey data to test the relationships among the three dimensions. Findings-Results show that: (1) The employees’ meditation experience significantly and positively influenced employees’ SDL readiness, companies’ OI capability and OP; (2) The study found that SDL has a direct and significant impact on OI; and OI has direct and significant influences on OP. Research limitation/implications-The generalization of the present study is constrained by (1) the existence of possible biases of the participants, (2) the variations of length, type and form of meditation demonstrated by the employees in these high tech companies, and (3) the fact that local data collection in Taiwan may present different cultural characteristics which may be quite different from those in other areas or countries. Managerial implications are presented at the end of the work. Practical implications-The findings indicate that SDL can only impact organizational innovation through employees “openness to a challenge”, “inquisitive nature”, self-understanding and acceptance of responsibility for learning. Such finding implies better organizational innovative capability under such conditions, thus organizations may encourage employees to take risks or accept new opportunities through various incentives, such as monetary rewards or public recognitions. More specifically, the present study discovers that while administration innovation is the most important element influencing an organization’s financial performance, market innovation is the key component in an organization’s market performance. Social implications-The present study discovers that meditation experience positively",
"title": ""
},
{
"docid": "c7eb67093a6f00bec0d96607e6384378",
"text": "Two primary simulations have been developed and are being updated for the Mars Smart Lander Entry, Descent, and Landing (EDL). The high fidelity engineering end-to-end EDL simulation that is based on NASA Langley’s Program to Optimize Simulated Trajectories (POST) and the end-to-end real-time, hardware-in-the-loop simulation test bed, which is based on NASA JPL’s Dynamics Simulator for Entry, Descent and Surface landing (DSENDS). This paper presents the status of these Mars Smart Lander EDL end-to-end simulations at this time. Various models, capabilities, as well as validation and verification for these simulations are discussed.",
"title": ""
},
{
"docid": "046148901452aefdc5a14357ed89cbd3",
"text": "Of late, there has been an avalanche of cross-layer design proposals for wireless networks. A number of researchers have looked at specific aspects of network performance and, approaching cross-layer design via their interpretation of what it implies, have presented several cross-layer design proposals. These proposals involve different layers of the protocol stack, and address both cellular and ad hoc networks. There has also been work relating to the implementation of cross-layer interactions. It is high time that these various individual efforts be put into perspective and a more holistic view be taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of cross-layer design, and by taking stock of the ongoing work. We suggest a definition for cross-layer design, discuss the basic types of cross-layer design with examples drawn from the literature, and categorize the initial proposals on how cross-layer interactions may be implemented. We then highlight some open challenges and new opportunities for cross-layer design. Designers presenting cross-layer design proposals can start addressing these as they move ahead.",
"title": ""
},
{
"docid": "5462d51955d2eaaa25fd6ff4d71b3f40",
"text": "\"Generations of scientists may yet have to come and go before the question of the origin of life is finally solved. That it will be solved eventually is as certain as anything can ever be amid the uncertainties that surround us.\" 1. Introduction How, where and when did life appear on Earth? Although Charles Darwin was reluctant to address these issues in his books, in a letter sent on February 1st, 1871 to his friend Joseph Dalton Hooker he wrote in a now famous paragraph that \"it is often said that all the conditions for the first production of a living being are now present, which could ever have been present. But if (and oh what a big if) we could conceive in some warm little pond with all sort of ammonia and phosphoric salts,-light, heat, electricity present, that a protein compound was chemically formed, ready to undergo still more complex changes, at the present such matter would be instantly devoured, or absorbed, which would not have been the case before living creatures were formed...\" (Darwin, 1871). Darwin's letter summarizes in a nutshell not only his ideas on the emergence of life, but also provides considerable insights on the views on the chemical nature of the basic biological processes that were prevalent at the time in many scientific circles. Although Friedrich Miescher had discovered nucleic acids (he called them nuclein) in 1869 (Dahm, 2005), the deciphering of their central role in genetic processes would remain unknown for almost another century. In contrast, the roles played by proteins in manifold biological processes had been established. Equally significant, by the time Darwin wrote his letter major advances had been made in the understanding of the material basis of life, which for a long time had been considered to be fundamentally different from inorganic compounds. The experiments of Friedrich Wöhler, Adolph Strecker and Aleksandr Butlerov, who had demonstrated independently the feasibility of the laboratory synthesis of urea, alanine, and sugars, respectively, from simple starting materials were recognized as a demonstration that the chemical gap separating organisms from the non-living was not insurmountable. But how had this gap first been bridged? The idea that life was an emergent feature of nature has been widespread since the nineteenth century. The major breakthrough that transformed the origin of life from pure speculation into workable and testable research models were proposals, suggested independently, in …",
"title": ""
},
{
"docid": "c273620e05cc5131e8c6d58b700a0aab",
"text": "Differential evolution has been shown to be an effective methodology for solving optimization problems over continuous space. In this paper, we propose an eigenvector-based crossover operator. The proposed operator utilizes eigenvectors of covariance matrix of individual solutions, which makes the crossover rotationally invariant. More specifically, the donor vectors during crossover are modified, by projecting each donor vector onto the eigenvector basis that provides an alternative coordinate system. The proposed operator can be applied to any crossover strategy with minimal changes. The experimental results show that the proposed operator significantly improves DE performance on a set of 54 test functions in CEC 2011, BBOB 2012, and CEC 2013 benchmark sets.",
"title": ""
},
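The eigenvector-based crossover described in the passage above can be pictured concretely: the target and donor vectors are rotated into the eigenvector basis of the population covariance matrix, ordinary binomial crossover is applied in that rotated coordinate system, and the trial vector is rotated back. The snippet below is only a minimal sketch of that idea, not the authors' implementation; the function name, the toy population and the parameter values are all hypothetical.

```python
import numpy as np

def eigenvector_crossover(population, target, donor, cr, rng):
    """Binomial crossover performed in the eigenvector basis of the population
    covariance matrix, which makes the operator rotationally invariant."""
    dim = population.shape[1]
    # Eigenbasis of the current population's covariance matrix.
    cov = np.cov(population, rowvar=False)
    _, q = np.linalg.eigh(cov)            # columns of q are eigenvectors
    # Project the target and donor vectors onto the eigenvector coordinates.
    target_e = q.T @ target
    donor_e = q.T @ donor
    # Standard binomial crossover, but carried out in the rotated space.
    mask = rng.random(dim) < cr
    mask[rng.integers(dim)] = True        # guarantee at least one donor gene
    trial_e = np.where(mask, donor_e, target_e)
    # Rotate the trial vector back to the original coordinate system.
    return q @ trial_e

rng = np.random.default_rng(0)
pop = rng.normal(size=(30, 5))            # toy population of 30 individuals
trial = eigenvector_crossover(pop, pop[0], pop[1] + 0.5, cr=0.9, rng=rng)
print(trial)
```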
{
"docid": "7a1f409eea5e0ff89b51fe0a26d6db8d",
"text": "A multi-agent system consisting of $N$ agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the multi-agent collision avoidance problem, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.",
"title": ""
},
{
"docid": "c68196f826f2afb61c13a0399d921421",
"text": "BACKGROUND\nIndividuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level.\n\n\nMETHODS\nUsing baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework.\n\n\nRESULTS\nCognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions.\n\n\nCONCLUSIONS\nWe developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment.",
"title": ""
}
] |
scidocsrr
|
5355b9be7a88b959ad05750fb5aa10ba
|
Supervised Learning with Quantum-Inspired Tensor Networks
|
[
{
"docid": "5d247482bb06e837bf04c04582f4bfa2",
"text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.",
"title": ""
}
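As a concrete illustration of the kernel-based learning methods surveyed in the passage above, the sketch below runs kernel principal component analysis with an RBF kernel on a toy two-circles dataset and then fits a linear classifier in the resulting coordinates. It is a minimal scikit-learn example only; the dataset and the hyperparameter values are illustrative choices, not taken from the paper.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

# Toy data that is not linearly separable in the input space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Kernel PCA: PCA carried out implicitly in the RBF feature space.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0)
X_kpca = kpca.fit_transform(X)

# A linear SVM in the kernel-PCA coordinates separates the circles easily.
clf = SVC(kernel="linear").fit(X_kpca, y)
print("training accuracy:", clf.score(X_kpca, y))
```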
] |
[
{
"docid": "4995bb31547a98adbe98c7a9f2bfa947",
"text": "This paper describes our proposed solutions designed for a STS core track within the SemEval 2016 English Semantic Textual Similarity (STS) task. Our method of similarity detection combines recursive autoencoders with a WordNet award-penalty system that accounts for semantic relatedness, and an SVM classifier, which produces the final score from similarity matrices. This solution is further supported by an ensemble classifier, combining an aligner with a bi-directional Gated Recurrent Neural Network and additional features, which then performs Linear Support Vector Regression to determine another set of scores.",
"title": ""
},
{
"docid": "69a11f89a92051631e1c07f2af475843",
"text": "Animal-assisted therapy (AAT) has been practiced for many years and there is now increasing interest in demonstrating its efficacy through research. To date, no known quantitative review of AAT studies has been published; our study sought to fill this gap. We conducted a comprehensive search of articles reporting on AAT in which we reviewed 250 studies, 49 of which met our inclusion criteria and were submitted to meta-analytic procedures. Overall, AAT was associated with moderate effect sizes in improving outcomes in four areas: Autism-spectrum symptoms, medical difficulties, behavioral problems, and emotional well-being. Contrary to expectations, characteristics of participants and studies did not produce differential outcomes. AAT shows promise as an additive to established interventions and future research should investigate the conditions under which AAT can be most helpful.",
"title": ""
},
{
"docid": "34f94f47de9329595f6b4a49139310a9",
"text": "The powerful data storage and data processing abilities of cloud computing (CC) and the ubiquitous data gathering capability of wireless sensor network (WSN) complement each other in CC-WSN integration, which is attracting growing interest from both academia and industry. However, job scheduling for CC integrated with WSN is a critical and unexplored topic. To fill this gap, this paper first analyzes the characteristics of job scheduling with respect to CC-WSN integration and then studies two traditional and popular job scheduling algorithms (i.e., Min-Min and Max-Min). Further, two novel job scheduling algorithms, namely priority-based two phase Min-Min (PTMM) and priority-based two phase Max-Min (PTAM), are proposed for CC integrated with WSN. Extensive experimental results show that PTMM and PTAM achieve shorter expected completion time than Min-Min and Max-Min, for CC integrated with WSN.",
"title": ""
},
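For readers unfamiliar with the Min-Min baseline that the proposed PTMM and PTAM schedulers build on, the sketch below implements plain Min-Min over an expected-time-to-compute matrix. The priority-based two-phase variants from the abstract are not reproduced here, and the toy matrix is invented for illustration.

```python
def min_min_schedule(etc):
    """Classic Min-Min scheduling: etc[t][m] is the expected execution time of
    task t on machine m.  Returns the task->machine assignment and the makespan."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines           # time at which each machine becomes free
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # Find the (task, machine) pair with the overall earliest completion time.
        best = None
        for t in unscheduled:
            for m in range(n_machines):
                completion = ready[m] + etc[t][m]
                if best is None or completion < best[0]:
                    best = (completion, t, m)
        completion, t, m = best
        assignment[t] = m
        ready[m] = completion
        unscheduled.remove(t)
    return assignment, max(ready)

# Three tasks on two machines (rows: tasks, columns: machines).
etc = [[4.0, 6.0],
       [3.0, 5.0],
       [7.0, 2.0]]
print(min_min_schedule(etc))
```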
{
"docid": "35225f6ca92daf5b17bdd2a5395b83ca",
"text": "A neural network with a single layer of hidden units of Gaussian type is proved to be a universal approximator for real-valued maps defined on convex, compact sets of R^n.",
"title": ""
},
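The universal-approximation statement above can be made tangible with a tiny numerical experiment: fix a layer of Gaussian hidden units spread over a compact interval and fit only the output weights by least squares. The sketch below is a toy under stated assumptions (the target function, number of centers and kernel width are arbitrary choices), not a proof of the theorem.

```python
import numpy as np

def fit_rbf_network(x, y, centers, width):
    """Fit the output weights of a single-hidden-layer Gaussian RBF network
    by linear least squares (the hidden layer itself is fixed)."""
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return weights

def rbf_predict(x, centers, width, weights):
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    return phi @ weights

# Approximate a continuous function on a compact interval.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * np.pi * x)

centers = np.linspace(-1.0, 1.0, 25)   # Gaussian units spread over the domain
weights = fit_rbf_network(x, y, centers, width=0.1)
approx = rbf_predict(x, centers, 0.1, weights)
print("max abs error:", np.max(np.abs(approx - y)))
```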
{
"docid": "0f7f8557ffa238a529f28f9474559cc4",
"text": "Fast incipient machine fault diagnosis is becoming one of the key requirements for economical and optimal process operation management. Artificial neural networks have been used to detect machine faults for a number of years and shown to be highly successful in this application area. This paper presents a novel test technique for machine fault detection and classification in electro-mechanical machinery from vibration measurements using one-class support vector machines (SVMs). In order to evaluate one-class SVMs, this paper examines the performance of the proposed method by comparing it with that of multilayer perceptron, one of the artificial neural network techniques, based on real benchmarking data.",
"title": ""
},
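A minimal version of the approach described above, training a one-class SVM on measurements from healthy operation only and flagging departures as faults, can be sketched with scikit-learn as below. The synthetic "vibration features" and the chosen nu and gamma values are placeholders, not the benchmark data or settings used in the paper.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in vibration features: rows are measurements, columns could be
# RMS level, kurtosis and band energies extracted from vibration signals.
X_healthy = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # healthy machine
X_faulty = rng.normal(loc=3.0, scale=1.5, size=(50, 4))     # simulated fault

scaler = StandardScaler().fit(X_healthy)

# The one-class SVM is trained on healthy data only; nu bounds the
# fraction of training points treated as outliers.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
detector.fit(scaler.transform(X_healthy))

# predict() returns +1 for "normal" and -1 for "fault/novelty".
pred_healthy = detector.predict(scaler.transform(X_healthy))
pred_faulty = detector.predict(scaler.transform(X_faulty))
print("false alarm rate on healthy data:", np.mean(pred_healthy == -1))
print("detection rate on faulty data:", np.mean(pred_faulty == -1))
```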
{
"docid": "21916d34fb470601fb6376c4bcd0839a",
"text": "BACKGROUND\nCutibacterium (Propionibacterium) acnes is assumed to play an important role in the pathogenesis of acne.\n\n\nOBJECTIVES\nTo examine if clones with distinct virulence properties are associated with acne.\n\n\nMETHODS\nMultiple C. acnes isolates from follicles and surface skin of patients with moderate to severe acne and healthy controls were characterized by multilocus sequence typing. To determine if CC18 isolates from acne patients differ from those of controls in the possession of virulence genes or lack of genes conducive to a harmonious coexistence the full genomes of dominating CC18 follicular clones from six patients and five controls were sequenced.\n\n\nRESULTS\nIndividuals carried one to ten clones simultaneously. The dominating C. acnes clones in follicles from acne patients were exclusively from the phylogenetic clade I-1a and all belonged to clonal complex CC18 with the exception of one patient dominated by the worldwide-disseminated and often antibiotic resistant clone ST3. The clonal composition of healthy follicles showed a more heterogeneous pattern with follicles dominated by clones representing the phylogenetic clades I-1a, I-1b, I-2 and II. Comparison of follicular CC18 gene contents, allelic versions of putative virulence genes and their promoter regions, and 54 variable-length intragenic and inter-genic homopolymeric tracts showed extensive conservation and no difference associated with the clinical origin of isolates.\n\n\nCONCLUSIONS\nThe study supports that C. acnes strains from clonal complex CC18 and the often antibiotic resistant clone ST3 are associated with acne and suggests that susceptibility of the host rather than differences within these clones may determine the clinical outcome of colonization.",
"title": ""
},
{
"docid": "b82805187bdfd14a4dd5efc6faf70f10",
"text": "Cloud computing has gained tremendous popularity in recent years. By outsourcing computation and storage requirements to public providers and paying for the services used, customers can enjoy the advantages of this new paradigm. Cloud computing provides a comparably lower-cost, scalable, location-independent platform for managing clients’ data. Compared to a traditional model of computing, which uses dedicated in-house infrastructure, cloud computing provides unprecedented benefits regarding cost and reliability. Cloud storage is a new cost-effective paradigm that aims at providing high availability, reliability, massive scalability and data sharing. However, outsourcing data to a cloud service provider introduces new challenges from the perspectives of data correctness and security. Over the years, many data integrity schemes have been proposed for protecting outsourced data. This paper aims to enhance the understanding of security issues associated with cloud storage and highlights the importance of data integrity schemes for outsourced data. In this paper, we have presented a taxonomy of existing data integrity schemes used for cloud storage. A comparative analysis of existing schemes is also provided along with a detailed discussion on possible security attacks and their mitigations. Additionally, we have discussed design challenges such as computational efficiency, storage efficiency, communication efficiency, and reduced I/O in these schemes. Furthermore, we have highlighted future trends and open issues for future research in cloud storage security.",
"title": ""
},
{
"docid": "cad2d29b9f51bbd146c5b683208cf3fa",
"text": "The stereotype content model (SCM) defines two fundamental dimensions of social perception, warmth and competence, predicted respectively by perceived competition and status. Combinations of warmth and competence generate distinct emotions of admiration, contempt, envy, and pity. From these intergroup emotions and stereotypes, the behavior from intergroup affect and stereotypes (BIAS) map predicts distinct behaviors: active and passive, facilitative and harmful. After defining warmth/communion and competence/agency, the chapter integrates converging work documenting the centrality of these dimensions in interpersonal as well as intergroup perception. Structural origins of warmth and competence perceptions result from competitors judged as not warm, and allies judged as warm; high status confers competence and low status incompetence. Warmth and competence judgments support systematic patterns of cognitive, emotional, and behavioral reactions, including ambivalent prejudices. Past views of prejudice as a univalent antipathy have obscured the unique responses toward groups stereotyped as competent but not warm or warm but not competent. Finally, the chapter addresses unresolved issues and future research directions.",
"title": ""
},
{
"docid": "76d22feb7da3dbc14688b0d999631169",
"text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.",
"title": ""
},
{
"docid": "4e91d37de7701e4a03c506c602ef3455",
"text": "This paper presents the design of Glow, a machine learning compiler for heterogeneous hardware. It is a pragmatic approach to compilation that enables the generation of highly optimized code for multiple targets. Glow lowers the traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation. The high-level intermediate representation allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives.",
"title": ""
},
{
"docid": "80f098f2cee2f0cef196c946ba93cb99",
"text": "In this paper we propose a new approach to incrementally initialize a manifold surface for automatic 3D reconstruction from images. More precisely we focus on the automatic initialization of a 3D mesh as close as possible to the final solution; indeed many approaches require a good initial solution for further refinement via multi-view stereo techniques. Our novel algorithm automatically estimates an initial manifold mesh for surface evolving multi-view stereo algorithms, where the manifold property needs to be enforced. It bootstraps from 3D points extracted via Structure from Motion, then iterates between a state-of-the-art manifold reconstruction step and a novel mesh sweeping algorithm that looks for new 3D points in the neighborhood of the reconstructed manifold to be added in the manifold reconstruction. The experimental results show quantitatively that the mesh sweeping improves the resolution and the accuracy of the manifold reconstruction, allowing a better convergence of state-of-the-art surface evolution multi-view stereo algorithms.",
"title": ""
},
{
"docid": "60ea79b98eade6b3a7bcd786484aa063",
"text": "This paper analyses the effect of adding Bitcoin, to the portfolio (stocks, bonds, Baltic index, MXEF, gold, real estate and crude oil) of an international investor by using daily data available from 2nd of July, 2010 to 2nd of August, 2016. We conclude that adding Bitcoin to portfolio, over the course of the considered period, always yielded a higher Sharpe ratio. This means that Bitcoin’s returns offset its high volatility. This paper, recognizing the fact that Bitcoin is a relatively new asset class, gives the readers a basic idea about the working of the virtual currency, the increasing number developments in the financial industry revolving around it, its unique features and the detailed look into its continuously growing acceptance across different fronts (Banks, Merchants and Countries) globally. We also construct optimal portfolios to reflect the highly lucrative and largely unexplored opportunities associated with investment in Bitcoin. Keywords—Portfolio management, Bitcoin, optimization, Sharpe ratio.",
"title": ""
},
{
"docid": "c023633ca0fe1cfc78b1d579d1ae157b",
"text": "A model is proposed that specifies the conditions under which individuals will become internally motivated to perform effectively on their jobs. The model focuses on the interaction among three classes of variables: (a) the psychological states of employees that must be present for internally motivated work behavior to develop; (b) the characteristics of jobs that can create these psychological states; and (c) the attributes of individuals that determine how positively a person will respond to a complex and challenging job. The model was tested for 658 employees who work on 62 different jobs in seven organizations, and results support its validity. A number of special features of the model are discussed (including its use as a basis for the diagnosis of jobs and the evaluation of job redesign projects), and the model is compared to other theories of job design.",
"title": ""
},
{
"docid": "324d5709b5638a06170a703e88732458",
"text": "Finding the most influential people is an NP-hard problem that has attracted many researchers in the field of social networks. The problem is also known as influence maximization and aims to find a number of people that are able to maximize the spread of influence through a target social network. In this paper, a new algorithm based on the linear threshold model of influence maximization is proposed. The main benefit of the algorithm is that it reduces the number of investigated nodes without loss of quality to decrease its execution time. Our experimental results based on two well-known datasets show that the proposed algorithm is much faster and at the same time more efficient than the state of the art",
"title": ""
},
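As background for the abstract above, the sketch below shows a baseline greedy seed selection under the linear threshold model: a node activates once the summed weight of its active in-neighbours exceeds its random threshold, and seeds are added one at a time by estimated spread. It does not include the node-pruning idea the paper proposes; the toy graph, the edge weights and the simulation counts are illustrative only.

```python
import random

def lt_spread(graph, weights, seeds, rng):
    """One linear-threshold simulation: a node activates once the summed weight
    of its active in-neighbours reaches its random threshold."""
    threshold = {v: rng.random() for v in graph}
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in graph:
            if v in active:
                continue
            influence = sum(weights[(u, v)] for u in graph
                            if v in graph[u] and u in active)
            if influence > 0 and influence >= threshold[v]:
                active.add(v)
                changed = True
    return len(active)

def greedy_influence_max(graph, weights, k, rng, n_sim=200):
    """Greedy seed selection: repeatedly add the node giving the largest
    estimated expected spread."""
    seeds = []
    for _ in range(k):
        best_node, best_spread = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            est = sum(lt_spread(graph, weights, seeds + [v], rng)
                      for _ in range(n_sim)) / n_sim
            if est > best_spread:
                best_node, best_spread = v, est
        seeds.append(best_node)
    return seeds

# Tiny directed toy graph: adjacency lists, with incoming weights summing to <= 1.
graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
weights = {(0, 1): 0.5, (0, 2): 0.4, (1, 2): 0.4, (1, 3): 0.5, (2, 3): 0.3}
print(greedy_influence_max(graph, weights, k=2, rng=random.Random(0)))
```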
{
"docid": "f6e8bda7c3915fa023f1b0f88f101f46",
"text": "This paper presents a formulation to the obstacle avoidance problem for semi-autonomous ground vehicles. The planning and tracking problems have been divided into a two-level hierarchical controller. The high level solves a nonlinear model predictive control problem to generate a feasible and obstacle free path. It uses a nonlinear vehicle model and utilizes a coordinate transformation which uses vehicle position along a path as the independent variable. The low level uses a higher fidelity model and solves the MPC problem with a sequential quadratic programming approach to track the planned path. Simulations show the method’s ability to safely avoid multiple obstacles while tracking the lane centerline. Experimental tests on a semi-autonomous passenger vehicle driving at high speed on ice show the effectiveness of the approach.",
"title": ""
},
{
"docid": "490d63de99f1973d5bab4c1a90633d18",
"text": "Flows transported across mobile ad hoc wireless networks suffer from route breakups caused by nodal mobility. In a network that aims to support critical interactive real-time data transactions, to provide for the uninterrupted execution of a transaction, or for the rapid transport of a high value file, it is essential to identify robust routes across which such transactions are transported. Noting that route failures can induce long re-routing delays that may be highly interruptive for many applications and message/stream transactions, it is beneficial to configure the routing scheme to send a flow across a route whose lifetime is longer, with sufficiently high probability, than the estimated duration of the activity that it is selected to carry. We evaluate the ability of a mobile ad hoc wireless network to distribute flows across robust routes by introducing the robust throughput measure as a performance metric. The utility gained by the delivery of flow messages is based on the level of interruption experienced by the underlying transaction. As a special case, for certain applications only transactions that are completed without being prematurely interrupted may convey data to their intended users that is of acceptable utility. We describe the mathematical calculation of a network’s robust throughput measure, as well as its robust throughput capacity. We introduce the robust flow admission and routing algorithm (RFAR) to provide for the timely and robust transport of flow transactions across mobile ad hoc wireless net-",
"title": ""
},
{
"docid": "e3a412a62d5e6a253158e2eba9b0fd05",
"text": "Colorectal cancer (CRC) is one of the most common cancers in the western world and is characterised by deregulation of the Wnt signalling pathway. Mutation of the adenomatous polyposis coli (APC) tumour suppressor gene, which encodes a protein that negatively regulates this pathway, occurs in almost 80% of CRC cases. The progression of this cancer from an early adenoma to carcinoma is accompanied by a well-characterised set of mutations including KRAS, SMAD4 and TP53. Using elegant genetic models the current paradigm is that the intestinal stem cell is the origin of CRC. However, human histology and recent studies, showing marked plasticity within the intestinal epithelium, may point to other cells of origin. Here we will review these latest studies and place these in context to provide an up-to-date view of the cell of origin of CRC.",
"title": ""
},
{
"docid": "cc12bd6dcd844c49c55f4292703a241b",
"text": "Eleven cases of sudden death of men restrained in a prone position by police officers are reported. Nine of the men were hogtied, one was tied to a hospital gurney, and one was manually held prone. All subjects were in an excited delirious state when restrained. Three were psychotic, whereas the others were acutely delirious from drugs (six from cocaine, one from methamphetamine, and one from LSD). Two were shocked with stun guns shortly before death. The literature is reviewed and mechanisms of death are discussed.",
"title": ""
},
{
"docid": "75e794b731685064820c79f4d68ed79b",
"text": "Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to implicitly indicate groups. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. We discuss results from evaluations of those techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.",
"title": ""
},
{
"docid": "f5c4c25286eb419eb8f7100702062180",
"text": "The primary objective of this investigation was to quantitatively identify which training variables result in the greatest strength and hypertrophy outcomes with lower body low intensity training with blood flow restriction (LI-BFR). Searches were performed for published studies with certain criteria. First, the primary focus of the study must have compared the effects of low intensity endurance or resistance training alone to low intensity exercise with some form of blood flow restriction. Second, subject populations had to have similar baseline characteristics so that valid outcome measures could be made. Finally, outcome measures had to include at least one measure of muscle hypertrophy. All studies included in the analysis utilized MRI except for two which reported changes via ultrasound. The mean overall effect size (ES) for muscle strength for LI-BFR was 0.58 [95% CI: 0.40, 0.76], and 0.00 [95% CI: −0.18, 0.17] for low intensity training. The mean overall ES for muscle hypertrophy for LI-BFR training was 0.39 [95% CI: 0.35, 0.43], and −0.01 [95% CI: −0.05, 0.03] for low intensity training. Blood flow restriction resulted in significantly greater gains in strength and hypertrophy when performed with resistance training than with walking. In addition, performing LI-BFR 2–3 days per week resulted in the greatest ES compared to 4–5 days per week. Significant correlations were found between ES for strength development and weeks of duration, but not for muscle hypertrophy. This meta-analysis provides insight into the impact of different variables on muscular strength and hypertrophy to LI-BFR training.",
"title": ""
}
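The effect sizes (ES) pooled in a meta-analysis like the one above are typically standardized mean differences. The helper below computes Hedges' g with an approximate 95% confidence interval for a single hypothetical study comparing an LI-BFR group with a control group; the numbers are invented and the formulas are the standard textbook ones, not values taken from the review.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Hedges' g) with an approximate 95% CI,
    the kind of per-study effect size pooled in a meta-analysis."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    correction = 1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0)  # small-sample correction
    g = correction * d
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + g ** 2 / (2.0 * (n_t + n_c)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical strength gains (kg) for a BFR group versus a control group.
print(hedges_g(mean_t=12.0, sd_t=5.0, n_t=15, mean_c=7.0, sd_c=6.0, n_c=15))
```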
] |
scidocsrr
|
ba56c243a11d96c06eb434c155b3da59
|
The Evolution of the Platform Concept: A Systematic Review
|
[
{
"docid": "4bfb389e1ae2433f797458ff3fe89807",
"text": "Many if not most markets with network externalities are two-sided. To succeed, platforms in industries such as software, portals and media, payment systems and the Internet, must “get both sides of the market on board ”. Accordingly, platforms devote much attention to their business model, that is to how they court each side while making money overall. The paper builds a model of platform competition with two-sided markets. It unveils the determinants of price allocation and enduser surplus for different governance structures (profit-maximizing platforms and not-for-profit joint undertakings), and compares the outcomes with those under an integrated monopolist and a Ramsey planner.",
"title": ""
},
{
"docid": "4ab8913fff86d8a737ed62c56fe2b39d",
"text": "This paper draws on the social and behavioral sciences in an endeavor to specify the nature and microfoundations of the capabilities necessary to sustain superior enterprise performance in an open economy with rapid innovation and globally dispersed sources of invention, innovation, and manufacturing capability. Dynamic capabilities enable business enterprises to create, deploy, and protect the intangible assets that support superior longrun business performance. The microfoundations of dynamic capabilities—the distinct skills, processes, procedures, organizational structures, decision rules, and disciplines—which undergird enterprise-level sensing, seizing, and reconfiguring capacities are difficult to develop and deploy. Enterprises with strong dynamic capabilities are intensely entrepreneurial. They not only adapt to business ecosystems, but also shape them through innovation and through collaboration with other enterprises, entities, and institutions. The framework advanced can help scholars understand the foundations of long-run enterprise success while helping managers delineate relevant strategic considerations and the priorities they must adopt to enhance enterprise performance and escape the zero profit tendency associated with operating in markets open to global competition. Copyright 2007 John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "597e00855111c6ccb891c96e28f23585",
"text": "Global food demand is increasing rapidly, as are the environmental impacts of agricultural expansion. Here, we project global demand for crop production in 2050 and evaluate the environmental impacts of alternative ways that this demand might be met. We find that per capita demand for crops, when measured as caloric or protein content of all crops combined, has been a similarly increasing function of per capita real income since 1960. This relationship forecasts a 100-110% increase in global crop demand from 2005 to 2050. Quantitative assessments show that the environmental impacts of meeting this demand depend on how global agriculture expands. If current trends of greater agricultural intensification in richer nations and greater land clearing (extensification) in poorer nations were to continue, ~1 billion ha of land would be cleared globally by 2050, with CO(2)-C equivalent greenhouse gas emissions reaching ~3 Gt y(-1) and N use ~250 Mt y(-1) by then. In contrast, if 2050 crop demand was met by moderate intensification focused on existing croplands of underyielding nations, adaptation and transfer of high-yielding technologies to these croplands, and global technological improvements, our analyses forecast land clearing of only ~0.2 billion ha, greenhouse gas emissions of ~1 Gt y(-1), and global N use of ~225 Mt y(-1). Efficient management practices could substantially lower nitrogen use. Attainment of high yields on existing croplands of underyielding nations is of great importance if global crop demand is to be met with minimal environmental impacts.",
"title": ""
},
{
"docid": "d0a2c8cf31e1d361a7c2b306dffddc25",
"text": "During the first years of the so-called fourth industrial revolution, most attempts to define the main ideas and tools behind this new era of manufacturing end up referring to the concept of smart machines that are able to communicate with each other and with the environment. In fact, these cyber-physical systems, connected through the internet of things, receive most of the attention when industry 4.0 is discussed. Nevertheless, the new industrial environment will also benefit from several tools and applications that support the formation of smart, embedded systems able to perform autonomous tasks. Most of these concepts rest on the same background theory as artificial intelligence, in which the analysis and filtering of huge amounts of incoming information from different types of sensors assists the interpretation and suggestion of the most appropriate course of action. For that reason, artificial intelligence fits naturally with the challenges that arise in the consolidation of the fourth industrial revolution.",
"title": ""
},
{
"docid": "d2abcdcdb6650c30838507ec1521b263",
"text": "Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research showed that DNNs can be highly vulnerable to adversarially generated instances, which look seemingly normal to human observers, but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks and dramatically reduce their effects (e.g., Fast Gradient Sign Method, DeepFool). An important component of JPEG compression is its ability to remove high frequency signal components, inside square blocks of an image. Such an operation is equivalent to selective blurring of the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.",
"title": ""
},
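The defense described above amounts to a simple pre-processing step: re-encode each input image through JPEG before it reaches the classifier, so that much of the high-frequency adversarial perturbation is discarded. A minimal sketch using Pillow follows; the quality factor and the synthetic image are illustrative, and the ensemble construction from the paper is not shown.

```python
import io
import numpy as np
from PIL import Image

def jpeg_defense(image_uint8, quality=75):
    """Re-encode an image through JPEG before classification; the lossy
    compression removes much of the high-frequency perturbation."""
    buffer = io.BytesIO()
    Image.fromarray(image_uint8).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.array(Image.open(buffer))

# Toy example: a clean image plus a small adversarial-style perturbation.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
noise = rng.integers(-4, 5, clean.shape)
perturbed = np.clip(clean.astype(np.int16) + noise, 0, 255).astype(np.uint8)

defended = jpeg_defense(perturbed, quality=75)
print("mean abs change introduced by the defense:",
      np.mean(np.abs(defended.astype(int) - perturbed.astype(int))))
```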
{
"docid": "6b49441def46e13e7289a49a6a615e8d",
"text": "In the present research, the authors investigated the impact of self-regulation resources on confirmatory information processing, that is, the tendency of individuals to systematically prefer standpoint-consistent information to standpoint-inconsistent information in information evaluation and search. In 4 studies with political and economic decision-making scenarios, it was consistently found that individuals with depleted self-regulation resources exhibited a stronger tendency for confirmatory information processing than did individuals with nondepleted self-regulation resources. Alternative explanations based on processes of ego threat, cognitive load, and mood were ruled out. Mediational analyses suggested that individuals with depleted self-regulation resources experienced increased levels of commitment to their own standpoint, which resulted in increased confirmatory information processing. In sum, the impact of ego depletion on confirmatory information search seems to be more motivational than cognitive in nature.",
"title": ""
},
{
"docid": "027681fed6a8932935ea8ef9e49cea13",
"text": "Nowadays smartphones are ubiquitous and - to some extent - already used to support sports training, e.g. runners or bikers track their trip with a gps-enabled smartphone. But recent mobile technology has powerful processors that allow even more complex tasks like image or graphics processing. In this work we address the question on how mobile technology can be used for collaborative boulder training. More specifically, we present a mobile augmented reality application to support various parts of boulder training. The proposed approach also incorporates sharing and other social features. Thus our solution supports collaborative training by providing an intuitive way to create, share and define goals and challenges together with friends. Furthermore we propose a novel method of trackable generation for augmented reality. Synthetically generated images of climbing walls are used as trackables for real, existing walls.",
"title": ""
},
{
"docid": "cedc00b6b92dc47d7480e51a146affe8",
"text": "We propose a new scheme for detecting and localizing the abnormal crowd behavior in video sequences. The proposed method starts from the assumption that the interaction force, as estimated by the Social Force Model (SFM), is a significant feature to analyze crowd behavior. We step forward this hypothesis by optimizing this force using Particle Swarm Optimization (PSO) to perform the advection of a particle population spread randomly over the image frames. The population of particles is drifted towards the areas of the main image motion, driven by the PSO fitness function aimed at minimizing the interaction force, so as to model the most diffused, normal, behavior of the crowd. In this way, anomalies can be detected by checking if some particles (forces) do not fit the estimated distribution, and this is done by a RANSAC-like method followed by a segmentation algorithm to finely localize the abnormal areas. A large set of experiments are carried out on public available datasets, and results show the consistent higher performances of the proposed method as compared to other state-of-the-art algorithms, proving the goodness of the proposed approach.",
"title": ""
},
{
"docid": "6afcc3c2e0c67823348cf89a0dfec9db",
"text": "BACKGROUND\nThe consumption of dietary protein is important for resistance-trained individuals. It has been posited that intakes of 1.4 to 2.0 g/kg/day are needed for physically active individuals. Thus, the purpose of this investigation was to determine the effects of a very high protein diet (4.4 g/kg/d) on body composition in resistance-trained men and women.\n\n\nMETHODS\nThirty healthy resistance-trained individuals participated in this study (mean ± SD; age: 24.1 ± 5.6 yr; height: 171.4 ± 8.8 cm; weight: 73.3 ± 11.5 kg). Subjects were randomly assigned to one of the following groups: Control (CON) or high protein (HP). The CON group was instructed to maintain the same training and dietary habits over the course of the 8 week study. The HP group was instructed to consume 4.4 grams of protein per kg body weight daily. They were also instructed to maintain the same training and dietary habits (e.g. maintain the same fat and carbohydrate intake). Body composition (Bod Pod®), training volume (i.e. volume load), and food intake were determined at baseline and over the 8 week treatment period.\n\n\nRESULTS\nThe HP group consumed significantly more protein and calories pre vs post (p < 0.05). Furthermore, the HP group consumed significantly more protein and calories than the CON (p < 0.05). The HP group consumed on average 307 ± 69 grams of protein compared to 138 ± 42 in the CON. When expressed per unit body weight, the HP group consumed 4.4 ± 0.8 g/kg/d of protein versus 1.8 ± 0.4 g/kg/d in the CON. There were no changes in training volume for either group. Moreover, there were no significant changes over time or between groups for body weight, fat mass, fat free mass, or percent body fat.\n\n\nCONCLUSIONS\nConsuming 5.5 times the recommended daily allowance of protein has no effect on body composition in resistance-trained individuals who otherwise maintain the same training regimen. This is the first interventional study to demonstrate that consuming a hypercaloric high protein diet does not result in an increase in body fat.",
"title": ""
},
{
"docid": "5c03be451f3610f39c94043d30314617",
"text": "Syphilis is a sexually transmitted disease (STD) produced by Treponema pallidum, which mainly affects humans and is able to invade practically any organ in the body. Its infection facilitates the transmission of other STDs. Since the end of the last decade, successive outbreaks of syphilis have been reported in most western European countries. Like other STDs, syphilis is a notifiable disease in the European Union. In Spain, epidemiological information is obtained nationwide via the country's system for recording notifiable diseases (Spanish acronym EDO) and the national microbiological information system (Spanish acronym SIM), which compiles information from a network of 46 sentinel laboratories in twelve Spanish regions. The STDs that are epidemiologically controlled are gonococcal infection, syphilis, and congenital syphilis. The incidence of each of these diseases is recorded weekly. The information compiled indicates an increase in the cases of syphilis and gonococcal infection in Spain in recent years. According to the EDO, in 1999, the number of cases of syphilis per 100,000 inhabitants was recorded to be 1.69, which has risen to 4.38 in 2007. In this article, we review the reappearance and the evolution of this infectious disease in eight European countries, and alert dentists to the importance of a) diagnosing sexually-transmitted diseases and b) notifying the centres that control them.",
"title": ""
},
{
"docid": "199df544c19711fbee2dd49e60956243",
"text": "Languages vary strikingly in how they encode motion events. In some languages (e.g. English), manner of motion is typically encoded within the verb, while direction of motion information appears in modifiers. In other languages (e.g. Greek), the verb usually encodes the direction of motion, while the manner information is often omitted, or encoded in modifiers. We designed two studies to investigate whether these language-specific patterns affect speakers' reasoning about motion. We compared the performance of English and Greek children and adults (a) in nonlinguistic (memory and categorization) tasks involving motion events, and (b) in their linguistic descriptions of these same motion events. Even though the two linguistic groups differed significantly in terms of their linguistic preferences, their performance in the nonlinguistic tasks was identical. More surprisingly, the linguistic descriptions given by subjects within language also failed to correlate consistently with their memory and categorization performance in the relevant regards. For the domain studied, these results are consistent with the view that conceptual development and organization are largely independent of language-specific labeling practices. The discussion emphasizes that the necessarily sketchy nature of language use assures that it will be at best a crude index of thought.",
"title": ""
},
{
"docid": "ba29af46fd410829c450eed631aa9280",
"text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.",
"title": ""
},
{
"docid": "9584909fc62cca8dc5c9d02db7fa7e5d",
"text": "As the nature of many materials handling tasks have begun to change from lifting to pushing and pulling, it is important that one understands the biomechanical nature of the risk to which the lumbar spine is exposed. Most previous assessments of push-pull tasks have employed models that may not be sensitive enough to consider the effects of the antagonistic cocontraction occurring during complex pushing and pulling motions in understanding the risk to the spine and the few that have considered the impact of cocontraction only consider spine load at one lumbar level. This study used an electromyography-assisted biomechanical model sensitive to complex motions to assess spine loadings throughout the lumbar spine as 10 males and 10 females pushed and pulled loads at three different handle heights and of three different load magnitudes. Pulling induced greater spine compressive loads than pushing, whereas the reverse was true for shear loads at the different lumbar levels. The results indicate that, under these conditions, anterior-posterior (A/P) shear loads were of sufficient magnitude to be of concern especially at the upper lumbar levels. Pushing and pulling loads equivalent to 20% of body weight appeared to be the limit of acceptable exertions, while pulling at low and medium handle heights (50% and 65% of stature) minimised A/P shear. These findings provide insight to the nature of spine loads and their potential risk to the low back during modern exertions.",
"title": ""
},
{
"docid": "c6d26dddce25dec91534bf5481f64c28",
"text": "We propose a new approach to image segmentation, which exploits the advantages of both conditional random fields (CRFs) and decision trees. In the literature, the potential functions of CRFs are mostly defined as a linear combination of some predefined parametric models, and then, methods, such as structured support vector machines, are applied to learn those linear coefficients. We instead formulate the unary and pairwise potentials as nonparametric forests—ensembles of decision trees, and learn the ensemble parameters and the trees in a unified optimization problem within the large-margin framework. In this fashion, we easily achieve nonlinear learning of potential functions on both unary and pairwise terms in CRFs. Moreover, we learn classwise decision trees for each object that appears in the image. Experimental results on several public segmentation data sets demonstrate the power of the learned nonlinear nonparametric potentials.",
"title": ""
},
{
"docid": "430993dbb8fe6cd6c7acdf613424e608",
"text": "Deep learning algorithms have recently produced state-of-the-art accuracy in many classification tasks, but this success is typically dependent on access to many annotated training examples. For domains without such data, an attractive alternative is to train models with light, or distant supervision. In this paper, we introduce a deep neural network for the Learning from Label Proportion (LLP) setting, in which the training data consist of bags of unlabeled instances with associated label distributions for each bag. We introduce a new regularization layer, Batch Averager, that can be appended to the last layer of any deep neural network to convert it from supervised learning to LLP. This layer can be implemented readily with existing deep learning packages. To further support domains in which the data consist of two conditionally independent feature views (e.g. image and text), we propose a co-training algorithm that iteratively generates pseudo bags and refits the deep LLP model to improve classification accuracy. We demonstrate our models on demographic attribute classification (gender and race/ethnicity), which has many applications in social media analysis, public health, and marketing. We conduct experiments to predict demographics of Twitter users based on their tweets and profile image, without requiring any user-level annotations for training. We find that the deep LLP approach outperforms baselines for both text and image features separately. Additionally, we find that co-training algorithm improves image and text classification by 4% and 8% absolute F1, respectively. Finally, an ensemble of text and image classifiers further improves the absolute F1 measure by 4% on average.",
"title": ""
},
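One plausible reading of the Batch Averager layer described above is sketched below in PyTorch: per-instance class probabilities are averaged over a bag so that the network output can be matched against the bag's known label proportion. This is an interpretation of the description rather than the authors' code; the backbone, the bag size and the proportions are made up for illustration.

```python
import torch
import torch.nn as nn

class BatchAverager(nn.Module):
    """Averages per-instance class probabilities over a bag, so the output can
    be compared with the bag's known label proportions (LLP setting)."""
    def forward(self, logits):
        probs = torch.softmax(logits, dim=1)
        return probs.mean(dim=0)                # one proportion vector per bag

# Any backbone can sit below the averaging layer.
backbone = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
averager = BatchAverager()

# One bag of 32 unlabeled instances whose label proportion is known.
bag = torch.randn(32, 20)
bag_proportion = torch.tensor([0.7, 0.3])       # e.g. 70% class 0, 30% class 1

predicted_proportion = averager(backbone(bag))
# Cross-entropy between known and predicted proportions drives learning.
loss = -(bag_proportion * torch.log(predicted_proportion + 1e-8)).sum()
loss.backward()
print(float(loss))
```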
{
"docid": "7ed693c8f8dfa62842304f4c6783af03",
"text": "Indian Sign Language (ISL) or Indo-Pakistani Sign Language is possibly the prevalent sign language variety in South Asia used by at least several hundred deaf signers. It is different in the phonetics, grammar and syntax from other country’s sign languages. Since ISL got standardized only recently, there is very little research work that has happened in ISL recognition. Considering the challenges in ISL gesture recognition, a novel method for recognition of static signs of Indian sign language alphabets and numerals for Human Computer Interaction (HCI) has been proposed in this thesis work. The developed algorithm for the hand gesture recognition system in ISL formulates a vision-based approach, using the Two-Dimensional Discrete Cosine Transform (2D-DCT) for image compression and the Self-Organizing Map (SOM) or Kohonen Self Organizing Feature Map (SOFM) Neural Network for pattern recognition purpose, simulated in MATLAB. To design an efficient and user friendly hand gesture recognition system, a GUI model has been implemented. The main advantage of this algorithm is its high-speed processing capability and low computational requirements, in terms of both speed and memory utilization. KeywordsArtificial Neural Network, Hand Gesture Recognition, Human Computer Interaction (HCI), Indian Sign Language (ISL), Kohonen Self Organizing Feature Map (SOFM), Two-Dimensional Discrete Cosine Transform (2D-",
"title": ""
},
{
"docid": "3c014205609a8bbc2f5e216d7af30b32",
"text": "This paper proposes a novel design for variable-flux machines with Alnico magnets. The proposed design uses tangentially magnetized magnets to achieve high air-gap flux density and to avoid demagnetization by the armature field. Barriers are also inserted in the rotor to limit the armature flux and to allow the machine to utilize both reluctance and magnet torque components. An analytical procedure is first applied to obtain the initial machine design parameters. Then, several modifications are applied to the stator and rotor designs through finite-element analysis (FEA) simulations to improve machine efficiency and torque density. A prototype of the proposed design is built, and the experimental results are in good correlation with the FEA simulations, confirming the validity of the proposed machine design concept.",
"title": ""
},
{
"docid": "bef0eaf89164e6ffeabc758a6c93840b",
"text": "Modern instruction set decoders feature translation of native instructions into internal micro-ops to simplify CPU design and improve instruction-level parallelism. However, this translation is static in most known instances. This work proposes context-sensitive decoding, a technique that enables customization of the micro-op translation at the microsecond or faster granularity, based on the current execution context and/or preset hardware events. While there are many potential applications, this work demonstrates its effectiveness with two use cases: 1) as a novel security defense to thwart instruction/data cache-based side-channel attacks, as demonstrated on commercial implementations of RSA and AES and 2) as a power management technique that performs selective devectorization to enable efficient unit-level power gating. This architecture, first by allowing execution to transition between different translation modes rapidly, defends against a variety of attacks, completely obfuscating code-dependent cache access, only sacrificing 5% in steady-state performance – orders of magnitude less than prior art. By selectively disabling the vector units without disabling vector arithmetic, context-sensitive decoding reduces energy by 12.9% with minimal loss in performance. Both optimizations work with no significant changes to the pipeline or the external ISA.",
"title": ""
},
{
"docid": "6ecf5cb70cca991fbefafb739a0a44c9",
"text": "Reasoning about objects, relations, and physics is central to human intelligence, and 1 a key goal of artificial intelligence. Here we introduce the interaction network, a 2 model which can reason about how objects in complex systems interact, supporting 3 dynamical predictions, as well as inferences about the abstract properties of the 4 system. Our model takes graphs as input, performs objectand relation-centric 5 reasoning in a way that is analogous to a simulation, and is implemented using 6 deep neural networks. We evaluate its ability to reason about several challenging 7 physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. 8 Our results show it can be trained to accurately simulate the physical trajectories of 9 dozens of objects over thousands of time steps, estimate abstract quantities such 10 as energy, and generalize automatically to systems with different numbers and 11 configurations of objects and relations. Our interaction network implementation 12 is the first general-purpose, learnable physics engine, and a powerful general 13 framework for reasoning about object and relations in a wide variety of complex 14 real-world domains. 15",
"title": ""
},
{
"docid": "bf0531b03cc36a69aca1956b21243dc6",
"text": "Sound of their breath fades with the light. I think about the loveless fascination, Under the milky way tonight. Lower the curtain down in memphis, Lower the curtain down all right. I got no time for private consultation, Under the milky way tonight. Wish I knew what you were looking for. Might have known what you would find. And it's something quite peculiar, Something thats shimmering and white. It leads you here despite your destination, Under the milky way tonight (chorus) Preface This Master's Thesis concludes my studies in Human Aspects of Information Technology (HAIT) at Tilburg University. It describes the development, implementation, and analysis of an automatic mood classifier for music. I would like to thank those who have contributed to and supported the contents of the thesis. Special thanks goes to my supervisor Menno van Zaanen for his dedication and support during the entire process of getting started up to the final results. Moreover, I would like to express my appreciation to Fredrik Mjelle for providing the user-tagged instances exported out of the MOODY database, which was used as the dataset for the experiments. Furthermore, I would like to thank Toine Bogers for pointing me out useful website links regarding music mood classification and sending me papers with citations and references. I would also like to thank Michael Voong for sending me his papers on music mood classification research, Jaap van den Herik for his support and structuring of my writing and thinking. I would like to recognise Eric Postma and Marieke van Erp for their time assessing the thesis as members of the examination committee. Finally, I would like to express my gratitude to my family for their enduring support. Abstract This research presents the outcomes of research into using the lingual part of music for building an automatic mood classification system. Using a database consisting of extracted lyrics and user-tagged mood attachments, we built a classifier based on machine learning techniques. By testing the classification system on various mood frameworks (or dimensions) we examined to what extent it is possible to attach mood tags automatically to songs based on lyrics only. Furthermore, we examined to what extent the linguistic part of music revealed adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that the use of term frequencies and tf*idf values provide a valuable source of …",
"title": ""
},
{
"docid": "f96eb97ea9300632cfae02084455946e",
"text": "A planar folded dipole antenna that exhibits wideband characteristics is proposed. The antenna has simple planar construction without a ground plane and is easy to be assembled. Parameter values are adjusted in order to obtain wideband properties and compactness by using an electromagnetic simulator based on the method of moments. An experimental result centered at 1.7 GHz for 50 impedance matching shows that the antenna has bandwidth over 55% . The gains of the antenna are almost constant (2 dBi) in this frequency band and the radiation patterns are very similar to those of a normal dipole antenna. It is also shown that the antenna has a self-balanced impedance property in this frequency band.",
"title": ""
},
{
"docid": "9131f56c00023a3402b602940be621bb",
"text": "Location estimation of a wireless capsule endoscope at 400 MHz MICS band is implemented here using both RSSI and TOA-based techniques and their performance investigated. To improve the RSSI-based location estimation, a maximum likelihood (ML) estimation method is employed. For the TOA-based localization, FDTD coupled with continuous wavelet transform (CWT) is used to estimate the time of arrival and localization is performed using multilateration. The performances of the proposed localization algorithms are evaluated using a computational heterogeneous biological tissue phantom in the 402MHz-405MHz MICS band. Our investigations reveal that the accuracy obtained by TOA based method is superior to RSSI based estimates. It has been observed that the ML method substantially improves the accuracy of the RSSI-based location estimation.",
"title": ""
}
] |
scidocsrr
|
5bbe97ae81ac959a40146de3a5680d52
|
Artificial Intelligence and Economic Growth
|
[
{
"docid": "4fa7ee44cdc4b0cd439723e9600131bd",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "84b8e98e143c0bfba79506c44ea12e6d",
"text": "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented. _What is The Singularity?_ The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur): o The development of computers that are \"awake\" and superhumanly intelligent. (To date, most controversy in the area of AI relates to whether we can create human equivalence in a machine. But if the answer is \"yes, we can\", then there is little doubt that beings more intelligent can be constructed shortly thereafter. o Large computer networks (and their associated users) may \"wake up\" as a superhumanly intelligent entity. o Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent. o Biological science may find ways to improve upon the natural human intellect. The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [16]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [19] has pointed out the AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me more specific: I'll be surprised if this event occurs before 2005 or after 2030.) What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work -the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct \"what if's\" in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals. From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in \"a million years\" (if ever) will likely happen in the next century. (In [4], Greg Bear paints a picture of the major changes happening in a matter of hours.) I think it's fair to call this event a singularity (\"the Singularity\" for the purposes of this paper). It is a point where our models must be discarded and a new reality rules. 
As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam [27] paraphrased John von Neumann as saying: One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. Von Neumann even uses the term singularity, though it appears he is still thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed (see [24]).) In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote [10]: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an \"intelligence explosion,\" and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make. Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's \"tool\" -any more than humans are the tools of rabbits or robins or chimpanzees. Through the '60s and '70s and '80s, recognition of the cataclysm spread [28] [1] [30] [4]. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the \"hard\" science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future [23]. Now they saw that their most diligent extrapolations resulted in the unknowable ... soon. Once, galactic empires might have seemed a Post-Human domain. Now, sadly, even interplanetary ones are. What about the '90s and the '00s and the '10s, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we'll be able to create human equivalent (or greater) intelligence. (There is the far-fetched possibility that we could make a human equivalent out of less powerful hardware, if were willing to give up speed, if we were willing to settle for an artificial being who was literally slow [29]. But it's much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. 
If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment.) But as time passes, we should see more symptoms. The dilemma felt by science fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technically commonplace.) We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of _true_ technological unemployment finally come true. Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace. When I began writing, it seemed very easy to come up with ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months. (Of course, this could just be me losing my imagination as I get old, but I see the effect in others too.) Like the shock in a compressible flow, the Singularity moves closer as we accelerate through the critical speed. And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected -perhaps even to the researchers involved. (\"But all our previous models were catatonic! We were just tweaking some parameters....\") If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened. And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era. And for all my rampant technological optimism, sometimes I think I'd be more comfortable if I were regarding these transcendental events from one thousand years remove ... instead of twenty. _Can the Singularity be Avoided?_ Well, maybe it won't happen at all: Sometimes I try to imagine the symptoms that we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose [18] and Searle [21] against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question \"How We Will Build a Machine that Thinks\" [Thearling]. As you might guess from the workshop's title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds. However, there was much debate about the raw hardware power that is present in organic brains.",
"title": ""
}
] |
[
{
"docid": "248adf4ee726dce737b7d0cbe3334ea3",
"text": "People can often find themselves out of their depth when they face knowledge-based problems, such as faulty technology, or medical concerns. This can also happen in everyday domains that users are simply inexperienced with, like cooking. These are common exploratory search conditions, where users don’t quite know enough about the domain to know if they are submitting a good query, nor if the results directly resolve their need or can be translated to do so. In such situations, people turn to their friends for help, or to forums like StackOverflow, so that someone can explain things to them and translate information to their specific need. This short paper describes work-in-progress within a Google-funded project focusing on Search Literacy in these situations, where improved search skills will help users to learn as they search, to search better, and to better comprehend the results. Focusing on the technology-problem domain, we present initial results from a qualitative study of questions asked and answers given in StackOverflow, and present plans for designing search engine support to help searchers learn as they search.",
"title": ""
},
{
"docid": "a7a3966aca3881430cd379ed42828e1b",
"text": "From rule-based to data-driven lexical entrainment models in spoken dialog systems José Lopes a,b,∗, Maxine Eskenazi c, Isabel Trancoso a,b a Spoken Language Laboratory, INESC-ID Lisboa, Rua Alves Redol 9, 1000-029 Lisboa, Portugal b Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal c Language Technologies Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA",
"title": ""
},
{
"docid": "f0432af5265a08ccde0111d2d05b93e2",
"text": "Cyber security is a critical issue now a days in various different domains in different disciplines. This paper presents a review analysis of cyber hacking attacks along with its experimental results and proposes a new methodology 3SEMCS named as three step encryption method for cyber security. By utilizing this new designed methodology, security at highest level will be easily provided especially on the time of request submission in the search engine as like google during client server communication. During its working a group of separate encryption algorithms are used. The benefit to utilize this three step encryption is to provide more tighten security by applying three separate encryption algorithms in each phase having different operations. And the additional benefit to utilize this methodology is to run over new designed private browser named as “RR” that is termed as Rim Rocks correspondingly this also help to check the authenticated sites or phishing sites by utilizing the strategy of passing URL address from phishing tank. This may help to block the phisher sites and user will relocate on previous page. The purpose to design this personnel browser is to enhance the level of security by",
"title": ""
},
{
"docid": "4f846635e4f23b7630d0c853559f71dc",
"text": "Parkinson's disease, known also as striatal dopamine deficiency syndrome, is a degenerative disorder of the central nervous system characterized by akinesia, muscular rigidity, tremor at rest, and postural abnormalities. In early stages of parkinsonism, there appears to be a compensatory increase in the number of dopamine receptors to accommodate the initial loss of dopamine neurons. As the disease progresses, the number of dopamine receptors decreases, apparently due to the concomitant degeneration of dopamine target sites on striatal neurons. The loss of dopaminergic neurons in Parkinson's disease results in enhanced metabolism of dopamine, augmenting the formation of H2O2, thus leading to generation of highly neurotoxic hydroxyl radicals (OH.). The generation of free radicals can also be produced by 6-hydroxydopamine or MPTP which destroys striatal dopaminergic neurons causing parkinsonism in experimental animals as well as human beings. Studies of the substantia nigra after death in Parkinson's disease have suggested the presence of oxidative stress and depletion of reduced glutathione; a high level of total iron with reduced level of ferritin; and deficiency of mitochondrial complex I. New approaches designed to attenuate the effects of oxidative stress and to provide neuroprotection of striatal dopaminergic neurons in Parkinson's disease include blocking dopamine transporter by mazindol, blocking NMDA receptors by dizocilpine maleate, enhancing the survival of neurons by giving brain-derived neurotrophic factors, providing antioxidants such as vitamin E, or inhibiting monoamine oxidase B (MAO-B) by selegiline. Among all of these experimental therapeutic refinements, the use of selegiline has been most successful in that it has been shown that selegiline may have a neurotrophic factor-like action rescuing striatal neurons and prolonging the survival of patients with Parkinson's disease.",
"title": ""
},
{
"docid": "86cdce8b04818cc07e1003d85305bd40",
"text": "Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.",
"title": ""
},
{
"docid": "92fb94c947ec85ef7fe44be24e0e2c34",
"text": "This paper describes the Microsoft submission to the WMT2018 news translation shared task. We participated in one language direction – English-German. Our system follows current best-practice and combines state-of-theart models with new data filtering (dual conditional cross-entropy filtering) and sentence weighting methods. We trained fairly standard Transformer-big models with an updated version of Edinburgh’s training scheme for WMT2017 and experimented with different filtering schemes for Paracrawl. According to automatic metrics (BLEU) we reached the highest score for this subtask with a nearly 2 BLEU point margin over the next strongest system. Based on human evaluation we ranked first among constrained systems. We believe this is mostly caused by our data filtering/weighting regime.",
"title": ""
},
{
"docid": "39a4914ad4f793d8ce412aa169736e75",
"text": "We present a metamaterial that acts as a strongly resonant absorber at terahertz frequencies. Our design consists of a bilayer unit cell which allows for maximization of the absorption through independent tuning of the electrical permittivity and magnetic permeability. An experimental absorptivity of 70% at 1.3 terahertz is demonstrated. We utilize only a single unit cell in the propagation direction, thus achieving an absorption coefficient alpha = 2000 cm(-1). These metamaterials are promising candidates as absorbing elements for thermally based THz imaging, due to their relatively low volume, low density, and narrow band response.",
"title": ""
},
{
"docid": "4b9fe62a497ffe0fe6e669542843292d",
"text": "Autonomous robot navigation through unknown, cluttered environments at high-speeds is still an open problem. Quadrotor platforms with this capability have only begun to emerge with the advancements in light-weight, small form factor sensing and computing. Many of the existing platforms, however, require excessive computation time to perform collision avoidance, which ultimately limits the vehicle's top speed. This work presents an efficient perception and planning approach that significantly reduces the computation time by using instantaneous perception data for collision avoidance. Minimum-time, state and input constrained motion primitives are generated by sampling terminal states until a collision-free path is found. The worst case performance of the Triple Integrator Planner (TIP) is nearly an order of magnitude faster than the state-of-the-art. Experimental results demonstrate the algorithm's ability to plan and execute aggressive collision avoidance maneuvers in highly cluttered environments.",
"title": ""
},
{
"docid": "0d6960b2817f98924f7de3b7d7774912",
"text": "Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture represenations, including bag-of-visual-words and the Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefit in transferring features from one domain to another.",
"title": ""
},
{
"docid": "f8cc1cf257711c83464a98b3d9167c94",
"text": "A Software Repository is a collection of library files and function codes. Programmers and Engineers design develop and build software libraries in a continuous process. Selecting suitable function code from one among many in the repository is quite challenging and cumbersome as we need to analyze semantic issues in function codes or components. Clustering and Mining Software Components for efficient reuse is the current topic of interest among researchers in Software Reuse Engineering and Information Retrieval. A relatively less research work is contributed in this field and has a good scope in the future. In this paper, the main idea is to cluster the software components and form a subset of libraries from the available repository. These clusters thus help in choosing the required component with high cohesion and low coupling quickly and efficiently. We define a similarity function and use the same for the process of clustering the software components and for estimating the cost of new project. The approach carried out is a feature vector based approach. © 2014 The Authors. Published by Elsevier B.V. Selection and/or peer-review under responsibility of the organizers of ITQM 2014",
"title": ""
},
{
"docid": "7e6c95fbaa356dfa5c95e370f23c8c92",
"text": "Volume II of the subject guide for 2910227, Interactive Multimedia introduced the very basics of metadata-and content-based mutimedia information retrieval. This chapter expands on that introduction in the specific context of music, giving an overview of the field of music information retrieval, some currently existing systems (whether research prototypes or commercially-deployed) and how they work, and some examples of problems yet unsolved. Figure 1.1 enumerates a number of tasks commonly attempted in the field of Music Information Retrieval, arranged by 'specificity', which can be thought of as how discriminating a particular task is, or how clear is the demarcation between relevant and non-relevant (or 'right' and 'wrong') retrieval results. As will become clear through the course of this chapter, these and other tasks in Music Information Retrieval have applications in domains as varied digital libraries, consumer digital devices, content delivery and musical performance. specificity high low music identification rights management, plagiarism detection multiple version handling melody extraction and retrieval performer or composer identification recommender systems style, mood, genre detection music-speech segmentation Figure 1.1: An enumeration of some tasks in the general field of Music Information Retrieval, arranged on a scale of 'specificity' after Byrd (2008); Casey et al. (2008). The specificity of a retrieval task relates to how much acoustic and musical material a retrieved result must share with a query to be considered relevant, and how many documents in total could be considered relevant retrieval results. This chapter includes a number of references to the published scientific literature.",
"title": ""
},
{
"docid": "ee2c37fd2ebc3fd783bfe53213e7470e",
"text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "cf31b8eb971e89d4521c4a70cf181bc3",
"text": "In this paper we address the problem of scalable, native and adaptive query processing over Linked Stream Data integrated with Linked Data. Linked Stream Data consists of data generated by stream sources, e.g., sensors, enriched with semantic descriptions, following the standards proposed for Linked Data. This enables the integration of stream data with Linked Data collections and facilitates a wide range of novel applications. Currently available systems use a “black box” approach which delegates the processing to other engines such as stream/event processing engines and SPARQL query processors by translating to their provided languages. As the experimental results described in this paper show, the need for query translation and data transformation, as well as the lack of full control over the query execution, pose major drawbacks in terms of efficiency. To remedy these drawbacks, we present CQELS (Continuous Query Evaluation over Linked Streams), a native and adaptive query processor for unified query processing over Linked Stream Data and Linked Data. In contrast to the existing systems, CQELS uses a “white box” approach and implements the required query operators natively to avoid the overhead and limitations of closed system regimes. CQELS provides a flexible query execution framework with the query processor dynamically adapting to the changes in the input data. During query execution, it continuously reorders operators according to some heuristics to achieve improved query execution in terms of delay and complexity. Moreover, external disk access on large Linked Data collections is reduced with the use of data encoding and caching of intermediate query results. To demonstrate the efficiency of our approach, we present extensive experimental performance evaluations in terms of query execution time, under varied query types, dataset sizes, and number of parallel queries. These results show that CQELS outperforms related approaches by orders of magnitude.",
"title": ""
},
{
"docid": "914f9bf7d24d0a0ee8c42e1263a04646",
"text": "With the rapid growth in the usage of social networks worldwide, uploading and sharing of user-generated content, both text and visual, has become increasingly prevalent. An analysis of the content a user shares and engages with can provide valuable insights into an individual's preferences and lifestyle. In this paper, we present a system to automatically infer a user's interests by analysing the content of the photos they share online. We propose a way to leverage web image search engines for detecting high-level semantic concepts, such as interests, in images, without relying on a large set of labeled images. We demonstrate the effectiveness of our system through quantitative and qualitative results on data collected from Instagram.",
"title": ""
},
{
"docid": "ca932a0b6b71f009f95bad6f2f3f8a38",
"text": "Page 13 Supply chain management is increasingly being recognized as the integration of key business processes across the supply chain. For example, Hammer argues that now that companies have implemented processes within the firm, they need to integrate them between firms: Streamlining cross-company processes is the next great frontier for reducing costs, enhancing quality, and speeding operations. It is where this decade’s productivity wars will be fought. The victors will be those companies that are able to take a new approach to business, working closely with partners to design and manage processes that extend across traditional corporate boundaries. They will be the ones that make the leap from efficiency to super efficiency [1]. Monczka and Morgan also focus on the importance of process integration in supply chain management [2]. The piece that seems to be missing from the literature is a comprehensive definition of the processes that constitute supply chain management. How can companies achieve supply chain integration if there is not a common understanding of the key business processes? It seems that in order to build links between supply chain members it is necessary for companies to implement a standard set of supply chain processes. Practitioners and educators need a common definition of supply chain management, and a shared understanding of the processes. We recommend the definition of supply chain management developed and used by The Global Supply Chain Forum: Supply Chain Management is the integration of key business processes from end user through original suppliers that provides products, services, and information that add value for customers and other stakeholders [3]. The Forum members identified eight key processes that need to be implemented within and across firms in the supply chain. To date, The Supply Chain Management Processes",
"title": ""
},
{
"docid": "c98bdc262bbc53b5858bea7598f85b6c",
"text": "Parallel corpora have driven great progress in the field of Text Simplification. However, most sentence alignment algorithms either offer a limited range of alignment types supported, or simply ignore valuable clues present in comparable documents. We address this problem by introducing a new set of flexible vicinity-driven paragraph and sentence alignment algorithms that 1-N, N-1, N-N and long distance null alignments without the need for hard-toreplicate supervised models.",
"title": ""
},
{
"docid": "9290ca06a925f8e52f445feb3f0a257a",
"text": "Multi-task learning is a promising approach for efficiently and effectively addressing multiple mutually related recognition tasks. Many scene understanding tasks such as semantic segmentation and depth prediction can be framed as cross-modal encoding/decoding, and hence most of the prior work used multi-modal datasets for multi-task learning. However, the inter-modal commonalities, such as one across image, depth, and semantic labels, have not been fully exploited. We propose a multi-modal encoder-decoder networks to harness the multi-modal nature of multi-task scene recognition. In addition to the shared latent representation among encoder-decoder pairs, our model also has shared skip connections from different encoders. By combining these two representation sharing mechanisms, the proposed method efficiently learns a shared feature representation among all modalities in the training data. Experiments using two public datasets shows the advantage of our method over baseline methods that are based on encoder-decoder networks and multi-modal auto-encoders.",
"title": ""
},
{
"docid": "5c444fcd85dd89280eee016fd1cbd175",
"text": "Over the last years, object detection has become a more and more active field of research in robotics. An important problem in object detection is the need for sufficient labeled training data to learn good classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by leveraging data sets available on the World Wide Web. Specifically, we show how to use objects from Google’s 3D Warehouse to train an object detection system for 3D point clouds collected by robots navigating through both urban and indoor environments. In order to deal with the different characteristics of the web data and the real robot data, we additionally use a small set of labeled point clouds and perform domain adaptation. Our experiments demonstrate that additional data taken from the 3D Warehouse along with our domain adaptation greatly improves the classification accuracy on real-world environments.",
"title": ""
},
{
"docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a",
"text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.",
"title": ""
}
] |
scidocsrr
|
82d15811583e63a67c7ba60179cfd8bb
|
Marble MLFQ: An educational visualization tool for the multilevel feedback queue algorithm
|
[
{
"docid": "b43118e150870aab96af1a7b32515202",
"text": "Algorithm visualization (AV) technology graphically illustrates how algorithms work. Despite the intuitive appeal of the technology, it has failed to catch on in mainstream computer science education. Some have attributed this failure to the mixed results of experimental studies designed to substantiate AV technology’s educational effectiveness. However, while several integrative reviews of AV technology have appeared, none has focused specifically on the software’s effectiveness by analyzing this body of experimental studies as a whole. In order to better understand the effectiveness of AV technology, we present a systematic metastudy of 24 experimental studies. We pursue two separate analyses: an analysis of independent variables, in which we tie each study to a particular guiding learning theory in an attempt to determine which guiding theory has had the most predictive success; and an analysis of dependent variables, which enables us to determine which measurement techniques have been most sensitive to the learning benefits of AV technology. Our most significant finding is that how students use AV technology has a greater impact on effectiveness than what AV technology shows them. Based on our findings, we formulate an agenda for future research into AV effectiveness. A META-STUDY OF ALGORITHM VISUALIZATION EFFECTIVENESS 3",
"title": ""
}
] |
[
{
"docid": "d4ac0d6890cc89e2525b9537376cce39",
"text": "Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as super pixels, is a widely used preprocessing step in segmentation algorithms. Super pixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that super pixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent super pixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.",
"title": ""
},
{
"docid": "8344bc2e3165bd2a3a426d3c3699257f",
"text": "We present a methodology for designing and implementing interactive intelligences. The Constructionist Design Methodology (CDM) – so called because it advocates modular building blocks and incorporation of prior work – addresses factors that we see as key to future advances in A.I., including interdisciplinary collaboration support, coordination of teams and large-scale systems integration. We test the methodology by building an interactive multi-functional system with a real-time perception-action loop. The system, whose construction relied entirely on the methodology, consists of an embodied virtual agent that can perceive both real and virtual objects in an augmented-reality room and interact with a user through coordinated gestures and speech. Wireless tracking technologies give the agent awareness of the environment and the user’s speech and communicative acts. User and agent can communicate about things in the environment, their placement and function, as well as more abstract topics such as current news, through situated multimodal dialog. The results demonstrate CDM’s strength in simplifying the modeling of complex, multi-functional systems requiring architectural experimentation and exploration of unclear sub-system boundaries, undefined variables, and tangled data flow and control hierarchies. Introduction The creation of embodied humanoids and broad A.I. systems requires integration of a large number of functionalities that must be carefully coordinated to achieve coherent system behavior. We are working on formalizing a methodology that can help in this process. The architectural foundation we have chosen for the approach is based on the concept of a network of interacting modules, communicating via messages. To test the design methodology we chose a system with a human user that interacts in real-time with a simulated human, in an augmented-reality environment. In this paper we present the design methodology and describe the system that we built to test it. Newell [1992] urged for the search of unified theories of cognition, and recent work in A.I. has increasingly focused on integration of multiple systems (cf. [Simmons et 1 While Newell’s architecture Soar is based on a small set of general principles, intended to explain a wide range of cognitive phenomena, Newell makes it very clear in his book [Newell 1992] that he does not consider Soar to be the unified theory of cognition. We read his call for unification not in the narrow sense to mean the particular premises he chose for Soar, but rather in the more broad sense to refer to the general breadth of cognitive models. Constructionist Design Methodology for Interactive Intelligences K.R.Thórisson et al., 2004 Accepted to AAAI Magazine, Dec. 2003 2 al. 2003, McCarthy et al. 2002, Bischoff et al. 1999]). Unified theories necessarily mean integration of many functionalities, but our prior experience in building systems that integrate multiple features from artificial intelligence and computer graphics [Bryson & Thórisson 2000, Lucente 2000, Thórisson 1999] has made it very clear that such integration can be a challenge, even for a team of experienced developers. In addition to basic technical issues – connecting everything together can be prohibitive in terms of time – it can be difficult to get people with different backgrounds, such as computer graphics, hardware, and artificial intelligence, to communicate effectively. 
Coordinating such an effort can thus be a management task of a tall order; keeping all parties synchronized takes skill and time. On top of this comes the challenge of deciding the scope of the system: What seems simple to a computer graphics expert may in fact be a long-standing dream of the A.I. person, and vice versa. Several factors motivate our work. First, a much-needed move towards building on prior work in A.I., to promote incremental accumulation of knowledge in creating intelligent systems, is long overdue. The relatively small group who is working on broad models of mind, bridging across disciplines, needs better ways to share results and work together, and to work with others outside their field. To this end our principles foster re-usable software components, through a common middleware specification, and mechanisms for defining interfaces between components. Second, by focusing on the re-use of existing work we are able to support the construction of more powerful systems than otherwise possible, speeding up the path towards useful, deployable systems. Third, we believe that to study mental mechanisms they need to be embedded in a larger cognitive model with significant breadth, to contextualize their operation and enable their testing under boundary conditions. This calls for an increased focus on supporting large-scale integration and experimentation. Fourth, by bridging across multiple functionalities in a single, unified system, researchers’ familiarity and breadth of experience with the various models of thought to date – as well as new ones – increases. This is important – as are in fact all of the above points – when the goal is to develop unified theories of cognition. Inspired to a degree by the classic LEGO bricks, our methodology – which we call a Constructionist Approach to A.I. – puts modularity at its center: Functionalities of the system are broken into individual software modules, which are typically larger than software classes (i.e. objects and methods) in object-oriented programming, but smaller than the typical enterprise application. The role of each module is determined in part by specifying the message types and information content that needs to flow between the various functional parts of the system. Using this functional outline we then define and develop, or select, components for perception, knowledge representation, planning, animation, and other desired functionalities. Behind this work lies the conjecture that the mind can be modeled through the adequate combination of interacting, functional machines (modules). Of course, this is still debated in the research community and not all researchers are convinced of its merits. However, this claim is in its essence simply a combination of two less radical ones. First, that a divide-and-conquer methodology will be fruitful in studying the mind as a system. Since practically all scientific results since the Greek philosophers are based on this, it is hard to argue against it. In contrast to the search for unified 2 http://www.MINDMAKERS.ORG Constructionist Design Methodology for Interactive Intelligences K.R.Thórisson et al., 2004 Accepted to AAAI Magazine, Dec. 2003 3 theories in physics, we see the search for unified theories of cognition in the same way as articulated in Minsky’s [1986] theory, that the mind is a multitude of interacting components, and his (perhaps whimsical but fundamental) claim that the brain is a hack. 
In other words, we expect a working model of the mind to incorporate, and coherently address, what at first seems a tangle of control hierarchies and data paths. Which relates to another important theoretical stance: The need to model more than a single or a handful of the mind’s mechanisms in isolation in order to understand the working mind. In a system of many modules with rich interaction, only a model incorporating a rich spectrum of (animal or human) mental functioning will give us a correct picture of the broad principles underlying intelligence. Figure 1: Our embodied agent Mirage is situated in the lab. Here we see how he appears to the user through the head-mounted glasses. (Image has been enhanced for clarity.) There is essentially nothing in the Constructionist approach to A.I. that lends it more naturally to behavior-based A.I. [c.f. Brooks 1991] or “classical” A.I. – its principles sit beside both. In fact, since CDM is intended to address the integration problem of very broad cognitive systems, it must be able to encompass all variants and approaches to date. We think it unlikely that any of the principles we present will be found objectionable, or even completely novel for that matter, by a seasoned software engineer. But these principles are custom-tailored to guide the construction of large cognitive systems, and we hope it will be used, extended and improved by many others over time. To test the power of a new methodology, a novel problem is preferred over one that has a known solution. The system we chose to develop presented us with a unique scope and unsolved integration issues: An augmented reality setting inhabited by an embodied virtual character; the character would be visible via a see-through stereoscopic display that the user wears, and would help them navigate the real-world environment. The character, called Mirage, should appear as a transparent, ghost-like 3 Personal communication, 1994. Constructionist Design Methodology for Interactive Intelligences K.R.Thórisson et al., 2004 Accepted to AAAI Magazine, Dec. 2003 4 stereoscopic 3-D graphic superimposed on the user’s real world view (Figure 1). This system served as a test-bed for our methodology; it is presented in sufficient detail here to demonstrate the application of the methodology and to show its modular philosophy, which it mirrors closely.",
"title": ""
},
{
"docid": "697580dda38c9847e9ad7c6a14ad6cd0",
"text": "Background: This paper describes an analysis that was conducted on newly collected repository with 92 versions of 38 proprietary, open-source and academic projects. A preliminary study perfomed before showed the need for a further in-depth analysis in order to identify project clusters.\n Aims: The goal of this research is to perform clustering on software projects in order to identify groups of software projects with similar characteristic from the defect prediction point of view. One defect prediction model should work well for all projects that belong to such group. The existence of those groups was investigated with statistical tests and by comparing the mean value of prediction efficiency.\n Method: Hierarchical and k-means clustering, as well as Kohonen's neural network was used to find groups of similar projects. The obtained clusters were investigated with the discriminant analysis. For each of the identified group a statistical analysis has been conducted in order to distinguish whether this group really exists. Two defect prediction models were created for each of the identified groups. The first one was based on the projects that belong to a given group, and the second one - on all the projects. Then, both models were applied to all versions of projects from the investigated group. If the predictions from the model based on projects that belong to the identified group are significantly better than the all-projects model (the mean values were compared and statistical tests were used), we conclude that the group really exists.\n Results: Six different clusters were identified and the existence of two of them was statistically proven: 1) cluster proprietary B -- T=19, p=0.035, r=0.40; 2) cluster proprietary/open - t(17)=3.18, p=0.05, r=0.59. The obtained effect sizes (r) represent large effects according to Cohen's benchmark, which is a substantial finding.\n Conclusions: The two identified clusters were described and compared with results obtained by other researchers. The results of this work makes next step towards defining formal methods of reuse defect prediction models by identifying groups of projects within which the same defect prediction model may be used. Furthermore, a method of clustering was suggested and applied.",
"title": ""
},
{
"docid": "39ad1394bc419f70c830b1ad9c90664f",
"text": "Building on a recent work of Harrison, Armstrong, Harrison, Iverson and Lange which suggested that Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV) scores might systematically overestimate the severity of intellectual impairments if Canadian norms are used, the present study examined differences between Canadian and American derived WAIS-IV scores from 861 postsecondary students attending school across the province of Ontario, Canada. This broader data set confirmed a trend whereby individuals’ raw scores systematically produced lower standardized scores through the use of Canadian as opposed to American norms. The differences do not appear to be due to cultural, educational, or population differences, as participants acted as their own controls. The ramifications of utilizing the different norms were examined with regard to psychoeducational assessments and educational placement decisions particularly with respect to the diagnoses of Learning Disability and Intellectual Disability.",
"title": ""
},
{
"docid": "1a6a6c6721073e3664c6a0a2fdd20cfc",
"text": "This paper presents a new control strategy for a doubly fed induction generator (DFIG) under unbalanced network voltage conditions. Coordinated control of the grid- and rotor-side converters (GSC and RSC, respectively) during voltage unbalance is proposed. Under an unbalanced supply voltage, the RSC is controlled to eliminate the torque pulsation at double supply frequency. The oscillation of the stator output active power is then compensated by the active power output from the GSC, to ensure constant active power output from the overall DFIG generation system. In order to provide precise control of the positive- and negative-sequence currents of the GSC and RSC, a current control scheme consisting of a proportional integral (PI) controller and a resonant (R) compensator is presented. The PI plus R current regulator is implemented in the positive synchronous reference frame without the need to decompose the positive- and negative-sequence components. Simulations on a 1.5-MW DFIG system and experimental tests on a 1.5-kW prototype validate the proposed strategy. Precise control of both positive- and negative-sequence currents and simultaneous elimination of torque and total active power oscillations have been achieved.",
"title": ""
},
{
"docid": "b27224825bb28b9b8d0eea37f8900d42",
"text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.",
"title": ""
},
{
"docid": "703acc0a9c73c7c2b3ca68c635fec82f",
"text": "Purpose – Using 12 case studies, the purpose of this paper is to investigate the use of business analysis techniques in BPR. Some techniques are used more than others depending on the fit between the technique and the problem. Other techniques are preferred due to their versatility, easy to use, and flexibility. Some are difficult to use requiring skills that analysts do not possess. Problem analysis, and business process analysis and activity elimination techniques are preferred for process improvement projects, and technology analysis for technology problems. Root cause analysis (RCA) and activitybased costing (ABC) are seldom used. RCA requires specific skills and ABC is only applicable for discrete business activities. Design/methodology/approach – This is an exploratory case study analysis. The author analyzed 12 existing business reengineering (BR) case studies from the MIS literature. Cases include, but not limited to IBM Credit Union, Chase Manhattan Bank, Honeywell Corporation, and Cigna. Findings – The author identified eight business analysis techniques used in business process reengineering. The author found that some techniques are preferred over others. Some possible reasons are related to the fit between the analysis technique and the problem situation, the ease of useof-use of the chosen technique, and the versatility of the technique. Some BR projects require the use of several techniques, while others require just one. It appears that the problem complexity is correlated with the number of techniques required or used. Research limitations/implications – Small sample sizes are often subject to criticism about replication and generalizability of results. However, this research is a good starting point for expanding the sample to allowmore generalizable results. Future research may investigate the deeper connections between reengineering and analysis techniques and the risks of using various techniques to diagnose problems in multiple dimensions. An investigation of fit between problems and techniques could be explored. Practical implications – The author have a better idea which techniques are used more, which are more versatile, and which are difficult to use and why. Practitioners and academicians have a better understanding of the fit between technique and problem and how best to align them. It guides the selection of choosing a technique, and exposes potential problems. For example RCA requires knowledge of fishbone diagram construction and interpreting results. Unfamiliarity with the technique results in disaster and increases project risk. Understanding the issues helps to reduce project risk and increase project success, benefiting project teams, practitioners, and organizations. Originality/value –Many aspects of BR have been studied but the contribution of this research is to investigate relationships between business analysis techniques and business areas, referred to as BR dimensions. The author try to find answers to the following questions: first, are business analysis techniques used for BR project, and is there evidence that BR affects one or more areas of the business? Second, are BR projects limited to a single dimension? Third, are some techniques better suited for diagnosing problems in specific dimensions and are some techniques more difficult to use than others, if so why?; are some techniques used more than others, if so why?",
"title": ""
},
{
"docid": "f18dc5d572f60da7c85d50e6a42de2c9",
"text": "Recent developments in remote sensing are offering a promising opportunity to rethink conventional control strategies of wind turbines. With technologies such as LIDAR, the information about the incoming wind field - the main disturbance to the system - can be made available ahead of time. Feedforward control can be easily combined with traditional collective pitch feedback controllers and has been successfully tested on real systems. Nonlinear model predictive controllers adjusting both collective pitch and generator torque can further reduce structural loads in simulations but have higher computational times compared to feedforward or linear model predictive controller. This paper compares a linear and a commercial nonlinear model predictive controller to a baseline controller. On the one hand simulations show that both controller have significant improvements if used along with the preview of the rotor effective wind speed. On the other hand the nonlinear model predictive controller can achieve better results compared to the linear model close to the rated wind speed.",
"title": ""
},
{
"docid": "ed9b027bafedfa9305d11dca49ecc930",
"text": "This paper announces and discusses the experimental results from the Noisy Iris Challenge Evaluation (NICE), an iris biometric evaluation initiative that received worldwide participation and whose main innovation is the use of heavily degraded data acquired in the visible wavelength and uncontrolled setups, with subjects moving and at widely varying distances. The NICE contest included two separate phases: 1) the NICE.I evaluated iris segmentation and noise detection techniques and 2) the NICE:II evaluated encoding and matching strategies for biometric signatures. Further, we give the performance values observed when fusing recognition methods at the score level, which was observed to outperform any isolated recognition strategy. These results provide an objective estimate of the potential of such recognition systems and should be regarded as reference values for further improvements of this technology, which-if successful-may significantly broaden the applicability of iris biometric systems to domains where the subjects cannot be expected to cooperate.",
"title": ""
},
{
"docid": "577f373477f6b8a8bee6a694dab6d3c9",
"text": "The YouTube-8M video classification challenge requires teams to classify 0.7 million videos into one or more of 4,716 classes. In this Kaggle competition, we placed in the top 3% out of 650 participants using released video and audio features . Beyond that, we extend the original competition by including text information in the classification, making this a truly multi-modal approach with vision, audio and text. The newly introduced text data is termed as YouTube-8M-Text. We present a classification framework for the joint use of text, visual and audio features, and conduct an extensive set of experiments to quantify the benefit that this additional mode brings. The inclusion of text yields state-of-the-art results, e.g. 86.7% GAP on the YouTube-8M-Text validation dataset.",
"title": ""
},
{
"docid": "bd335c2fd0f866a8af83eab1458c0a4a",
"text": "Agile methodologies, in particular the framework SCRUM, are popular in software development companies. Most of the time, however, it is not feasible for these companies to apply every characteristic of the framework. This paper presents a hybrid application of verbal decision analysis methodologies in order to select some of the most relevant SCRUM approaches to be applied by a company. A questionnaire was developed and a group of experienced ScrumMasters was selected to answer it, aiming at characterizing every SCRUM approach into criteria values. The hybrid application consists in dividing the SCRUM practices into groups (stage supported by the ORCLASS method application), using the ORCLASSWEB tool. Then, the rank of the preferred practices will be generated by the application of the ZAPROS-LM method.",
"title": ""
},
{
"docid": "e090bb879e35dbabc5b3c77c98cd6832",
"text": "Immunity of analog circuit blocks is becoming a major design risk. This paper presents an automated methodology to simulate the susceptibility of a circuit during the design phase. More specifically, we propose a CAD tool which determines the fail/pass criteria of a signal under direct power injection (DPI). This contribution describes the function of the tool which is validated by a LDO regulator.",
"title": ""
},
{
"docid": "db8cbcc8a7d233d404a18a54cb9fedae",
"text": "Edge preserving filters preserve the edges and its information while blurring an image. In other words they are used to smooth an image, while reducing the edge blurring effects across the edge like halos, phantom etc. They are nonlinear in nature. Examples are bilateral filter, anisotropic diffusion filter, guided filter, trilateral filter etc. Hence these family of filters are very useful in reducing the noise in an image making it very demanding in computer vision and computational photography applications like denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer, relighting etc. This paper provides a concrete introduction to edge preserving filters starting from the heat diffusion equation in olden to recent eras, an overview of its numerous applications, as well as mathematical analysis, various efficient and optimized ways of implementation and their interrelationships, keeping focus on preserving the boundaries, spikes and canyons in presence of noise. Furthermore it provides a realistic notion for efficient implementation with a research scope for hardware realization for further acceleration.",
"title": ""
},
{
"docid": "b8c0e4b41334790155203533105a4d0d",
"text": "In our previous work, we have proposed the extended Karnaugh map representation (EKMR) scheme for multidimensional array representation. In this paper, we propose two data compression schemes, EKMR Compressed Row/ Column Storage (ECRS/ECCS), for multidimensional sparse arrays based on the EKMR scheme. To evaluate the proposed schemes, we compare them to the CRS/CCS schemes. Both theoretical analysis and experimental tests were conducted. In the theoretical analysis, we analyze the CRS/CCS and the ECRS/ ECCS schemes in terms of the time complexity, the space complexity, and the range of their usability for practical applications. In experimental tests, we compare the compressing time of sparse arrays and the execution time of matrixmatrix addition and matrix-matrix multiplication based on the CRS/CCS and the ECRS/ECCS schemes. The theoretical analysis and experimental results show that the ECRS/ECCS schemes are superior to the CRS/CCS schemes for all the evaluated criteria, except the space complexity in some cases.",
"title": ""
},
{
"docid": "f6193fa2ac2ea17c7710241a42d34a33",
"text": "BACKGROUND\nThe most common microcytic and hypochromic anemias are iron deficiency anemia and thalassemia trait. Several indices to discriminate iron deficiency anemia from thalassemia trait have been proposed as simple diagnostic tools. However, some of the best discriminative indices use parameters in the formulas that are only measured in modern counters and are not always available in small laboratories. The development of an index with good diagnostic accuracy based only on parameters derived from the blood cell count obtained using simple counters would be useful in the clinical routine. Thus, the aim of this study was to develop and validate a discriminative index to differentiate iron deficiency anemia from thalassemia trait.\n\n\nMETHODS\nTo develop and to validate the new formula, blood count data from 106 (thalassemia trait: 23 and iron deficiency: 83) and 185 patients (thalassemia trait: 30 and iron deficiency: 155) were used, respectively. Iron deficiency, β-thalassemia trait and α-thalassemia trait were confirmed by gold standard tests (low serum ferritin for iron deficiency anemia, HbA2>3.5% for β-thalassemia trait and using molecular biology for the α-thalassemia trait).\n\n\nRESULTS\nThe sensitivity, specificity, efficiency, Youden's Index, area under receiver operating characteristic curve and Kappa coefficient of the new formula, called the Matos & Carvalho Index were 99.3%, 76.7%, 95.7%, 76.0, 0.95 and 0.83, respectively.\n\n\nCONCLUSION\nThe performance of this index was excellent with the advantage of being solely dependent on the mean corpuscular hemoglobin concentration and red blood cell count obtained from simple automatic counters and thus may be of great value in underdeveloped and developing countries.",
"title": ""
},
{
"docid": "a53935e12b0a18d6555315149fdb4563",
"text": "With the prevalence of mobile devices such as smartphones and tablets, the ways people access to the Internet have changed enormously. In addition to the information that can be recorded by traditional Web-based e-commerce like frequent online shopping stores and browsing histories, mobile devices are capable of tracking sophisticated browsing behavior. The aim of this study is to utilize users' browsing behavior of reading hotel reviews on mobile devices and subsequently apply text-mining techniques to construct user interest profiles to make personalized hotel recommendations. Specifically, we design and implement an app where the user can search hotels and browse hotel reviews, and every gesture the user has performed on the touch screen when reading the hotel reviews is recorded. We then identify the paragraphs of hotel reviews that a user has shown interests based on the gestures the user has performed. Text mining techniques are applied to construct the interest profile of the user according to the review content the user has seriously read. We collect more than 5,000 reviews of hotels in Taipei, the largest metropolitan area of Taiwan, and recruit 18 users to participate in the experiment. Experimental results demonstrate that the recommendations made by our system better match the user's hotel selections than previous approaches.",
"title": ""
},
{
"docid": "316e4984bf6eef57a7f823b5303164f1",
"text": "Recent technical and infrastructural developments posit flipped (or inverted) classroom approaches ripe for exploration. Flipped classroom approaches have students use technology to access the lecture and other instructional resources outside the classroom in order to engage them in active learning during in-class time. Scholars and educators have reported a variety of outcomes of a flipped approach to instruction; however, the lack of a summary from these empirical studies prevents stakeholders from having a clear view of the benefits and challenges of this style of instruction. The purpose of this article is to provide a review of the flipped classroom approach in order to summarize the findings, to guide future studies, and to reflect the major achievements in the area of Computer Science (CS) education. 32 peer-reviewed articles were collected from a systematic literature search and analyzed based on a categorization of their main elements. The results of this survey show the direction of flipped classroom research during recent years and summarize the benefits and challenges of adopting a flipped approach in the classroom. Suggestions for future research include: describing in-detail the flipped approach; performing controlled experiments; and triangulating data from diverse sources. These future research efforts will reveal which aspects of a flipped classroom work better and under which circumstances and student groups. The findings will ultimately allow us to form best practices and a unified framework for guiding/assisting educators who want to adopt this teaching style.",
"title": ""
},
{
"docid": "4646848b959a356bb4d7c0ef14d53c2c",
"text": "Consumerization of IT (CoIT) is a key trend affecting society at large, including organizations of all kinds. A consensus about the defining aspects of CoIT has not yet been reached. Some refer to CoIT as employees bringing their own devices and technologies to work, while others highlight different aspects. While the debate about the nature and consequences of CoIT is still ongoing, many definitions have already been proposed. In this paper, we review these definitions and what is known about CoIT thus far. To guide future empirical research in this emerging area, we also review several established theories that have not yet been applied to CoIT but in our opinion have the potential to shed a deeper understanding on CoIT and its consequences. We discuss which elements of the reviewed theories are particularly relevant for understanding CoIT and thereby provide targeted guidance for future empirical research employing these theories. Overall, our paper may provide a useful starting point for addressing the lack of theorization in the emerging CoIT literature stream and stimulate discussion about theorizing CoIT.",
"title": ""
},
{
"docid": "50af85ca1f0c642cd74e713182f5ef58",
"text": "Commentators suggest that between 30 and 60% of large US firms have adopted the Balanced Scorecard, first described by Bob Kaplan and David Norton in their seminal Harvard Business Review paper of 1992 (Kaplan and Norton, 1992; Neely and Marr, 2003). Empirical evidence that explores the performance impact of the balanced scorecard, however, is extremely rare and much that is available is anecdotal at best. This paper reports a study that set out to explore the performance impact of the balanced scorecard by employing a quasiexperimental design. Up to three years worth of financial data were collected from two sister divisions of an electrical wholesale chain based in the UK, one of which had implemented the balanced scorecard and one of which had not. The relative performance improvements of matched pairs of branches were compared to establish what, if any, performance differentials existed between the branches that had implemented the balanced scorecard and those that had not. The key findings of the study include: (i) when analyzing just the data from Electrical – the business that implemented the balanced scorecard it appears that implementation of the balanced scorecard might have had a positive impact on sales, gross profit and net profit; but (ii) when comparing Electrical’s performance with its sister company these findings can be questioned. Clearly further work on this important topic is required in similar settings where natural experiments occur.",
"title": ""
},
{
"docid": "e2f2961ab8c527914c3d23f8aa03e4bf",
"text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.",
"title": ""
}
] |
scidocsrr
|
e27668bcb0ad5e7e56f08b9ec04f2b97
|
Cauchy Graph Embedding
|
[
{
"docid": "6228f059be27fa5f909f58fb60b2f063",
"text": "We propose a unified manifold learning framework for semi-supervised and unsupervised dimension reduction by employing a simple but effective linear regression function to map the new data points. For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples X, the linear regression function h(X) and the regression residue F0 = F - h(X) simultaneously. Our new objective function integrates two terms related to label fitness and manifold smoothness as well as a flexible penalty term defined on the residue F0. Our Semi-Supervised learning framework, referred to as flexible manifold embedding (FME), can effectively utilize label information from labeled data as well as a manifold structure from both labeled and unlabeled data. By modeling the mismatch between h(X) and F, we show that FME relaxes the hard linear constraint F = h(X) in manifold regularization (MR), making it better cope with the data sampled from a nonlinear manifold. In addition, we propose a simplified version (referred to as FME/U) for unsupervised dimension reduction. We also show that our proposed framework provides a unified view to explain and understand many semi-supervised, supervised and unsupervised dimension reduction techniques. Comprehensive experiments on several benchmark databases demonstrate the significant improvement over existing dimension reduction algorithms.",
"title": ""
},
{
"docid": "da168a94f6642ee92454f2ea5380c7f3",
"text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"title": ""
}
] |
[
{
"docid": "c5822fd932e29193a11e749a2d10df0b",
"text": "Online deception is disrupting our daily life, organizational process, and even national security. Existing approaches to online deception detection follow a traditional paradigm by using a set of cues as antecedents for deception detection, which may be hindered by ineffective cue identification. Motivated by the strength of statistical language models (SLMs) in capturing the dependency of words in text without explicit feature extraction, we developed SLMs to detect online deception. We also addressed the data sparsity problem in building SLMs in general and in deception detection in specific using smoothing and vocabulary pruning techniques. The developed SLMs were evaluated empirically with diverse datasets. The results showed that the proposed SLM approach to deception detection outperformed a state-of-the-art text categorization method as well as traditional feature-based methods.",
"title": ""
},
{
"docid": "d09e4f8c58f9ff0760addfe1e313d5f6",
"text": "Currently, color image encryption is important to ensure its confidentiality during its transmission on insecure networks or its storage. The fact that chaotic properties are related with cryptography properties in confusion, diffusion, pseudorandom, etc., researchers around the world have presented several image (gray and color) encryption algorithms based on chaos, but almost all them with serious security problems have been broken with the powerful chosen/known plain image attack. In this work, we present a color image encryption algorithm based on total plain image characteristics (to resist a chosen/known plain image attack), and 1D logistic map with optimized distribution (for fast encryption process) based on Murillo-Escobar's algorithm (Murillo-Escobar et al. (2014) [38]). The security analysis confirms that the RGB image encryption is fast and secure against several known attacks; therefore, it can be implemented in real-time applications where a high security is required. & 2014 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "0f555a4c2415b6a5995905f1594871d4",
"text": "With the ultimate intent of improving the quality of life, identification of human's affective states on the collected electroencephalogram (EEG) has attracted lots of attention recently. In this domain, the existing methods usually use only a few labeled samples to classify affective states consisting of over thousands of features. Therefore, important information may not be well utilized and performance is lowered due to the randomness caused by the small sample problem. However, this issue has rarely been discussed in the previous studies. Besides, many EEG channels are irrelevant to the specific learning tasks, which introduce lots of noise to the systems and further lower the performance in the recognition of affective states. To address these two challenges, in this paper, we propose a novel Deep Belief Networks (DBN) based model for affective state recognition from EEG signals. Specifically, signals from each EEG channel are firstly processed with a DBN for effectively extracting critical information from the over thousands of features. The extracted low dimensional characteristics are then utilized in the learning to avoid the small sample problem. For the noisy channel problem, a novel stimulus-response model is proposed. The optimal channel set is obtained according to the response rate of each channel. Finally, a supervised Restricted Boltzmann Machine (RBM) is applied on the combined low dimensional characteristics from the optimal EEG channels. To evaluate the performance of the proposed Supervised DBN based Affective State Recognition (SDA) model, we implement it on the Deap Dataset and compare it with five baselines. Extensive experimental results show that the proposed algorithm can successfully handle the aforementioned two challenges and significantly outperform the baselines by 11.5% to 24.4%, which validates the effectiveness of the proposed algorithm in the task of affective state recognition.",
"title": ""
},
{
"docid": "8fd049da24568dea2227483415532f9b",
"text": "The notion of “semiotic scaffolding”, introduced into the semiotic discussions by Jesper Hoffmeyer in December of 2000, is proving to be one of the single most important concepts for the development of semiotics as we seek to understand the full extent of semiosis and the dependence of evolution, particularly in the living world, thereon. I say “particularly in the living world”, because there has been from the first a stubborn resistance among semioticians to seeing how a semiosis prior to and/or independent of living beings is possible. Yet the universe began in a state not only lifeless but incapable of supporting life, and somehow “moved” from there in the direction of being able to sustain life and finally of actually doing so. Wherever dyadic interactions result indirectly in a new condition that either moves the universe closer to being able to sustain life, or moves life itself in the direction not merely of sustaining itself but opening the way to new forms of life, we encounter a “thirdness” in nature of exactly the sort that semiosic triadicity alone can explain. This is the process, both within and without the living world, that requires scaffolding. This essay argues that a fuller understanding of this concept shows why “semiosis” says clearly what “evolution” says obscurely.",
"title": ""
},
{
"docid": "6f9186944cdeab30da7a530a942a5b3d",
"text": "In this work, we perform a comparative analysis of the impact of substrate technologies on the performance of 28 GHz antennas for 5G applications. For this purpose, we model, simulate, analyze and compare 2×2 patch antenna arrays on five substrate technologies typically used for manufacturing integrated antennas. The impact of these substrates on the impedance bandwidth, efficiency and gain of the antennas is quantified. Finally, the antennas are fabricated and measured. Excellent correlation is obtained between measurement and simulation results.",
"title": ""
},
{
"docid": "b8dcf30712528af93cb43c5960435464",
"text": "The first clinical description of Parkinson's disease (PD) will embrace its two century anniversary in 2017. For the past 30 years, mitochondrial dysfunction has been hypothesized to play a central role in the pathobiology of this devastating neurodegenerative disease. The identifications of mutations in genes encoding PINK1 (PTEN-induced kinase 1) and Parkin (E3 ubiquitin ligase) in familial PD and their functional association with mitochondrial quality control provided further support to this hypothesis. Recent research focused mainly on their key involvement in the clearance of damaged mitochondria, a process known as mitophagy. It has become evident that there are many other aspects of this complex regulated, multifaceted pathway that provides neuroprotection. As such, numerous additional factors that impact PINK1/Parkin have already been identified including genes involved in other forms of PD. A great pathogenic overlap amongst different forms of familial, environmental and even sporadic disease is emerging that potentially converges at the level of mitochondrial quality control. Tremendous efforts now seek to further detail the roles and exploit PINK1 and Parkin, their upstream regulators and downstream signaling pathways for future translation. This review summarizes the latest findings on PINK1/Parkin-directed mitochondrial quality control, its integration and cross-talk with other disease factors and pathways as well as the implications for idiopathic PD. In addition, we highlight novel avenues for the development of biomarkers and disease-modifying therapies that are based on a detailed understanding of the PINK1/Parkin pathway.",
"title": ""
},
{
"docid": "219a90eb2fd03cd6cc5d89fda740d409",
"text": "The general problem of computing poste rior probabilities in Bayesian networks is NP hard Cooper However e cient algorithms are often possible for particular applications by exploiting problem struc tures It is well understood that the key to the materialization of such a possibil ity is to make use of conditional indepen dence and work with factorizations of joint probabilities rather than joint probabilities themselves Di erent exact approaches can be characterized in terms of their choices of factorizations We propose a new approach which adopts a straightforward way for fac torizing joint probabilities In comparison with the clique tree propagation approach our approach is very simple It allows the pruning of irrelevant variables it accommo dates changes to the knowledge base more easily it is easier to implement More importantly it can be adapted to utilize both intercausal independence and condi tional independence in one uniform frame work On the other hand clique tree prop agation is better in terms of facilitating pre computations",
"title": ""
},
{
"docid": "579db3cec4e49d53090ee13f35385c35",
"text": "In cloud computing environments, multiple tenants are often co-located on the same multi-processor system. Thus, preventing information leakage between tenants is crucial. While the hypervisor enforces software isolation, shared hardware, such as the CPU cache or memory bus, can leak sensitive information. For security reasons, shared memory between tenants is typically disabled. Furthermore, tenants often do not share a physical CPU. In this setting, cache attacks do not work and only a slow cross-CPU covert channel over the memory bus is known. In contrast, we demonstrate a high-speed covert channel as well as the first side-channel attack working across processors and without any shared memory. To build these attacks, we use the undocumented DRAM address mappings. We present two methods to reverse engineer the mapping of memory addresses to DRAM channels, ranks, and banks. One uses physical probing of the memory bus, the other runs entirely in software and is fully automated. Using this mapping, we introduce DRAMA attacks, a novel class of attacks that exploit the DRAM row buffer that is shared, even in multi-processor systems. Thus, our attacks work in the most restrictive environments. First, we build a covert channel with a capacity of up to 2 Mbps, which is three to four orders of magnitude faster than memory-bus-based channels. Second, we build a side-channel template attack that can automatically locate and monitor memory accesses. Third, we show how using the DRAM mappings improves existing attacks and in particular enables practical Rowhammer attacks on DDR4.",
"title": ""
},
{
"docid": "5571389dcc25cbcd9c68517934adce1d",
"text": "The polysaccharide-containing extracellular fractions (EFs) of the edible mushroom Pleurotus ostreatus have immunomodulating effects. Being aware of these therapeutic effects of mushroom extracts, we have investigated the synergistic relations between these extracts and BIAVAC and BIAROMVAC vaccines. These vaccines target the stimulation of the immune system in commercial poultry, which are extremely vulnerable in the first days of their lives. By administrating EF with polysaccharides from P. ostreatus to unvaccinated broilers we have noticed slow stimulation of maternal antibodies against infectious bursal disease (IBD) starting from four weeks post hatching. For the broilers vaccinated with BIAVAC and BIAROMVAC vaccines a low to almost complete lack of IBD maternal antibodies has been recorded. By adding 5% and 15% EF in the water intake, as compared to the reaction of the immune system in the previous experiment, the level of IBD antibodies was increased. This has led us to believe that by using this combination of BIAVAC and BIAROMVAC vaccine and EF from P. ostreatus we can obtain good results in stimulating the production of IBD antibodies in the period of the chicken first days of life, which are critical to broilers' survival. This can be rationalized by the newly proposed reactivity biological activity (ReBiAc) principles by examining the parabolic relationship between EF administration and recorded biological activity.",
"title": ""
},
{
"docid": "a702269cd9fce037f2f74f895595d573",
"text": "This paper tackles the reduction of redundant repeating generation that is often observed in RNN-based encoder-decoder models. Our basic idea is to jointly estimate the upper-bound frequency of each target vocabulary in the encoder and control the output words based on the estimation in the decoder. Our method shows significant improvement over a strong RNN-based encoder-decoder baseline and achieved its best results on an abstractive summarization benchmark.",
"title": ""
},
{
"docid": "b789785d7e9cdde760af1d65faccfa60",
"text": "The use of an expired product may cause harm to its designated target. If the product is for human consumption, e.g. medicine, the result can be fatal. While most people can check the expiration date easily before using the product, it is very difficult for a visually impaired or a totally blind person to do so independently. This paper therefore proposes a solution that helps the visually impaired to identify a product and subsequently `read' the expiration date on a product using a handheld Smartphone. While there are a few commercial barcode decoder and text recognition applications for the mobile phone, they require the user to point the phone to the correct location - which is extremely hard for the visually impaired. We thus focus our research on helping the blind user to locate the barcode and the expiration date on a product package. After that, existing barcode decoding and OCR algorithms can be utilized to obtain the required information. A field trial with several bind- folded/totally-blind participants is conducted and shows that the proposed solution is effective in guiding a visually impaired user towards the barcode and expiry information, although some issues remain with the reliability of the off-the-shelf decoding algorithms on low-resolution videos.",
"title": ""
},
{
"docid": "599fb363d80fd1a7a6faaccbde3ecbb5",
"text": "In this survey a new application paradigm life and safety for critical operations and missions using wearable Wireless Body Area Networks (WBANs) technology is introduced. This paradigm has a vast scope of applications, including disaster management, worker safety in harsh environments such as roadside and building workers, mobile health monitoring, ambient assisted living and many more. It is often the case that during the critical operations and the target conditions, the existing infrastructure is either absent, damaged or overcrowded. In this context, it is envisioned that WBANs will enable the quick deployment of ad-hoc/on-the-fly communication networks to help save many lives and ensuring people's safety. However, to understand the applications more deeply and their specific characteristics and requirements, this survey presents a comprehensive study on the applications scenarios, their context and specific requirements. It explores details of the key enabling standards, existing state-of-the-art research studies, and projects to understand their limitations before realizing aforementioned applications. Application-specific challenges and issues are discussed comprehensively from various perspectives and future research and development directions are highlighted as an inspiration for new innovative solutions. To conclude, this survey opens up a good opportunity for companies and research centers to investigate old but still new problems, in the realm of wearable technologies, which are increasingly evolving and getting more and more attention recently.",
"title": ""
},
{
"docid": "748996944ebd52a7d82c5ca19b90656b",
"text": "The experiment was conducted with three biofloc treatments and one control in triplicate in 500 L capacity indoor tanks. Biofloc tanks, filled with 350 L of water, were fed with sugarcane molasses (BFTS), tapioca flour (BFTT), wheat flour (BFTW) and clean water as control without biofloc and allowed to stand for 30 days. The postlarvae of Litopenaeus vannamei (Boone, 1931) with an Average body weight of 0.15 0.02 g were stocked at the rate of 130 PL m 2 and cultured for a period of 60 days fed with pelleted feed at the rate of 1.5% of biomass. The total suspended solids (TSS) level was maintained at around 500 mg L 1 in BFT tanks. The addition of carbohydrate significantly reduced the total ammoniaN (TAN), nitrite-N and nitrate-N in water and it significantly increased the total heterotrophic bacteria (THB) population in the biofloc treatments. There was a significant difference in the final average body weight (8.49 0.09 g) in the wheat flour treatment (BFTW) than those treatment and control group of the shrimp. Survival of the shrimps was not affected by the treatments and ranged between 82.02% and 90.3%. The proximate and chemical composition of biofloc and proximate composition of the shrimp was significantly different between the biofloc treatments and control. Tintinids, ciliates, copepods, cyanobacteria and nematodes were identified in all the biofloc treatments, nematodes being the most dominant group of organisms in the biofloc. It could be concluded that the use of wheat flour (BFTW) effectively enhanced the biofloc production and contributed towards better water quality which resulted in higher production of shrimp.",
"title": ""
},
{
"docid": "f65e55d992bff2ce881aaf197a734adf",
"text": "hypervisor as a nondeterministic sequential program prove invariant properties of individual ϋobjects and compose them 14 Phase1 Startup Phase2 Intercept Phase3 Exception Proofs HW initiated concurrent execution Concurrent execution HW initiated sequential execution Sequential execution Intro. Motivating. Ex. Impl. Verif. Results Perf. Concl. Architecture",
"title": ""
},
{
"docid": "302a838f1a94596d37693363abcf1978",
"text": "In this paper we present a method for organizing and indexing logo digital libraries like the ones of the patent and trademark offices. We propose an efficient queried-by-example retrieval system which is able to retrieve logos by similarity from large databases of logo images. Logos are compactly described by a variant of the shape context descriptor. These descriptors are then indexed by a locality-sensitive hashing data structure aiming to perform approximate k-NN search in high dimensional spaces in sub-linear time. The experiments demonstrate the effectiveness and efficiency of this system on realistic datasets as the Tobacco-800 logo database.",
"title": ""
},
{
"docid": "6aee06316a24005ee2f8f4f1906e2692",
"text": "Sir, The origin of vestibular papillomatosis (VP) is controversial. VP describes the condition of multiple papillae that may cover the entire surface of the vestibule (1). Our literature search for vestibular papillomatosis revealed 13 reports in gynaecological journals and only one in a dermatological journal. Furthermore, searching for vulvar squamous papillomatosis revealed 6 reports in gynaecological journals and again only one in a dermatological journal. We therefore conclude that it is worthwhile drawing the attention of dermatologists to this entity.",
"title": ""
},
{
"docid": "a6acba54f34d1d101f4abb00f4fe4675",
"text": "We study the potential flow of information in interaction networks, that is, networks in which the interactions between the nodes are being recorded. The central notion in our study is that of an information channel. An information channel is a sequence of interactions between nodes forming a path in the network which respects the time order. As such, an information channel represents a potential way information could have flown in the interaction network. We propose algorithms to estimate information channels of limited time span from every node to other nodes in the network. We present one exact and one more efficient approximate algorithm. Both algorithms are onepass algorithms. The approximation algorithm is based on an adaptation of the HyperLogLog sketch, which allows easily combining the sketches of individual nodes in order to get estimates of how many unique nodes can be reached from groups of nodes as well. We show how the results of our algorithm can be used to build efficient influence oracles for solving the Influence maximization problem which deals with finding top k seed nodes such that the information spread from these nodes is maximized. Experiments show that the use of information channels is an interesting data-driven and model-independent way to find top k influential nodes in interaction networks.",
"title": ""
},
{
"docid": "9a9dc194e0ca7d1bb825e8aed5c9b4fe",
"text": "In this paper we show how to divide data <italic>D</italic> into <italic>n</italic> pieces in such a way that <italic>D</italic> is easily reconstructable from any <italic>k</italic> pieces, but even complete knowledge of <italic>k</italic> - 1 pieces reveals absolutely no information about <italic>D</italic>. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.",
"title": ""
},
{
"docid": "bd32bda2e79d28122f424ec4966cde15",
"text": "This paper holds a survey on plant leaf diseases classification using image processing. Digital image processing has three basic steps: image processing, analysis and understanding. Image processing contains the preprocessing of the plant leaf as segmentation, color extraction, diseases specific data extraction and filtration of images. Image analysis generally deals with the classification of diseases. Plant leaf can be classified based on their morphological features with the help of various classification techniques such as PCA, SVM, and Neural Network. These classifications can be defined various properties of the plant leaf such as color, intensity, dimensions. Back propagation is most commonly used neural network. It has many learning, training, transfer functions which is used to construct various BP networks. Characteristics features are the performance parameter for image recognition. BP networks shows very good results in classification of the grapes leaf diseases. This paper provides an overview on different image processing techniques along with BP Networks used in leaf disease classification.",
"title": ""
},
{
"docid": "5ce93a1c09b4da41f0cc920d5c7e6bdc",
"text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.",
"title": ""
}
] |
scidocsrr
|
26cfba9a362043885fecfdd3a039ac00
|
Edge Computing Architecture for Mobile Crowdsensing
|
[
{
"docid": "b34af4da147779c6d1505ff12cacd5aa",
"text": "Crowd-enabled place-centric systems gather and reason over large mobile sensor datasets and target everyday user locations (such as stores, workplaces, and restaurants). Such systems are transforming various consumer services (for example, local search) and data-driven organizations (city planning). As the demand for these systems increases, our understanding of how to design and deploy successful crowdsensing systems must improve. In this paper, we present a systematic study of the coverage and scaling properties of place-centric crowdsensing. During a two-month deployment, we collected smartphone sensor data from 85 participants using a representative crowdsensing system that captures 48,000 different place visits. Our analysis of this dataset examines issues of core interest to place-centric crowdsensing, including place-temporal coverage, the relationship between the user population and coverage, privacy concerns, and the characterization of the collected data. Collectively, our findings provide valuable insights to guide the building of future place-centric crowdsensing systems and applications.",
"title": ""
},
{
"docid": "cb00fba4374d845da2f7e18c421b07df",
"text": "The Internet of Things (IoT) is a new paradigm that combines aspects and technologies coming from different approaches. Ubiquitous computing, pervasive computing, Internet Protocol, sensing technologies, communication technologies, and embedded devices are merged together in order to form a system where the real and digital worlds meet and are continuously in symbiotic interaction. The smart object is the building block of the IoT vision. By putting intelligence into everyday objects, they are turned into smart objects able not only to collect information from the environment and interact/control the physical world, but also to be interconnected, to each other, through Internet to exchange data and information. The expected huge number of interconnected devices and the significant amount of available data open new opportunities to create services that will bring tangible benefits to the society, environment, economy and individual citizens. In this paper we present the key features and the driver technologies of IoT. In addition to identifying the application scenarios and the correspondent potential applications, we focus on research challenges and open issues to be faced for the IoT realization in the real world.",
"title": ""
},
{
"docid": "802de1032f66e3e10a712fadb07ef432",
"text": "In this article, we provided a tutorial on MEC technology and an overview of the MEC framework and architecture recently defined by the ETSI MEC ISG standardization group. We described some examples of MEC deployment, with special reference to IoT uses since the IoT is recognized as a main driver for 5G. After having also discussed benefits and challenges for MEC toward 5G, we can say that MEC has definitely a window of opportunity to contribute to the creation of a common layer of integration for the IoT world. One of the main questions still open is: How will this technology coexist with LTE advanced pro and the future 5G network? For this aspect, we foresee the need for very strong cooperation between 3GPP and ETSI (e.g., NFV and possibly other SDOs) to avoid unnecessary duplication in the standard. In this sense, MEC could pave the way and be natively integrated in the network of tomorrow.",
"title": ""
}
] |
[
{
"docid": "653ca5c9478b1b1487fc24eeea8c1677",
"text": "A fundamental question in information theory and in computer science is how to measure similarity or the amount of shared information between two sequences. We have proposed a metric, based on Kolmogorov complexity, to answer this question and have proven it to be universal. We apply this metric in measuring the amount of shared information between two computer programs, to enable plagiarism detection. We have designed and implemented a practical system SID (Software Integrity Diagnosis system) that approximates this metric by a heuristic compression algorithm. Experimental results demonstrate that SID has clear advantages over other plagiarism detection systems. SID system server is online at http://software.bioinformatics.uwaterloo.ca/SID/.",
"title": ""
},
{
"docid": "2f92cde5a194a4cabdebebe2c7cc11ba",
"text": "The expressive power of neural networks is important for understanding deep learning. Most existing works consider this problem from the view of the depth of a network. In this paper, we study how width affects the expressiveness of neural networks. Classical results state that depth-bounded (e.g. depth-2) networks with suitable activation functions are universal approximators. We show a universal approximation theorem for width-bounded ReLU networks: width-(n+ 4) ReLU networks, where n is the input dimension, are universal approximators. Moreover, except for a measure zero set, all functions cannot be approximated by width-n ReLU networks, which exhibits a phase transition. Several recent works demonstrate the benefits of depth by proving the depth-efficiency of neural networks. That is, there are classes of deep networks which cannot be realized by any shallow network whose size is no more than an exponential bound. Here we pose the dual question on the width-efficiency of ReLU networks: Are there wide networks that cannot be realized by narrow networks whose size is not substantially larger? We show that there exist classes of wide networks which cannot be realized by any narrow network whose depth is no more than a polynomial bound. On the other hand, we demonstrate by extensive experiments that narrow networks whose size exceed the polynomial bound by a constant factor can approximate wide and shallow network with high accuracy. Our results provide more comprehensive evidence that depth may be more effective than width for the expressiveness of ReLU networks.",
"title": ""
},
{
"docid": "a652eb10bf8f15855f9ac1f1981dc07f",
"text": "n = 379) were jail inmates at the time of ingestion, 22.9% ( n = 124) had a history of psychosis, and 7.2% ( n = 39) were alcoholics or denture-wearing elderly subjects. Most foreign bodies passed spontaneously (75.6%; n = 410). Endoscopic removal was possible in 19.5% ( n = 106) and was not associated with any morbidity. Only 4.8% ( n = 26) required surgery. Of the latter, 30.8% ( n = 8) had long gastric FBs with no tendency for distal passage and were removed via gastrotomy; 15.4% ( n = 4) had thin, sharp FBs, causing perforation; and 53.8% ( n = 14) had FBs impacted in the ileocecal region, which were removed via appendicostomy. Conservative approach to FB ingestion is justified, although early endoscopic removal from the stomach is recommended. In cases of failure, surgical removal for gastric FBs longer than 7.0 cm is wise. Thin, sharp FBs require a high index of suspicion because they carry a higher risk for perforation. The ileocecal region is the most common site of impaction. Removal of the FB via appendicostomy is the safest option and should not be delayed more than 48 hours.",
"title": ""
},
{
"docid": "2da528d39b8815bcbb9a8aaf20d94926",
"text": "Collaborative filtering (CF) is out of question the most widely adopted and successful recommendation approach. A typical CF-based recommender system associates a user with a group of like-minded users based on their individual preferences over all the items, either explicit or implicit, and then recommends to the user some unobserved items enjoyed by the group. However, we find that two users with similar tastes on one item subset may have totally different tastes on another set. In other words, there exist many user-item subgroups each consisting of a subset of items and a group of like-minded users on these items. It is more reasonable to predict preferences through one user's correlated subgroups, but not the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate a new Multiclass Co-Clustering (MCoC) model, which captures relations of user-to-item, user-to-user, and item-to-item simultaneously. Then, we combine traditional CF algorithms with subgroups for improving their top- <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"cai-ieq1-2566622.gif\"/></alternatives></inline-formula> recommendation performance. Our approach can be seen as a new extension of traditional clustering CF models. Systematic experiments on several real data sets have demonstrated the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "32c068c8341ae0ff12556050bb8f526d",
"text": "In this paper, we assess the challenges for multi-domain, multi-lingual question answering, create necessary resources for benchmarking and develop a baseline model. We curate 500 articles in six different domains from the web. These articles form a comparable corpora of 250 English documents and 250 Hindi documents. From these comparable corpora, we have created 5, 495 question-answer pairs with the questions and answers, both being in English and Hindi. The question can be both factoid or short descriptive types. The answers are categorized in 6 coarse and 63 finer types. To the best of our knowledge, this is the very first attempt towards creating multi-domain, multi-lingual question answering evaluation involving English and Hindi. We develop a deep learning based model for classifying an input question into the coarse and finer categories depending upon the expected answer. Answers are extracted through similarity computation and subsequent ranking. For factoid question, we obtain an MRR value of 49.10% and for short descriptive question, we obtain a BLEU score of 41.37%. Evaluation of question classification model shows the accuracies of 90.12% and 80.30% for coarse and finer classes, respectively.",
"title": ""
},
{
"docid": "0a51a9bd6021a8a0a7c6783dffedff06",
"text": "Classification of music genre has been an inspiring job in the area of music information retrieval (MIR). Classification of genre can be valuable to explain some actual interesting problems such as creating song references, finding related songs, finding societies who will like that specific song. The purpose of our research is to find best machine learning algorithm that predict the genre of songs using k-nearest neighbor (k-NN) and Support Vector Machine (SVM). This paper also presents comparative analysis between k-nearest neighbor (k-NN) and Support Vector Machine (SVM) with dimensionality return and then without dimensionality reduction via principal component analysis (PCA). The Mel Frequency Cepstral Coefficients (MFCC) is used to extract information for the data set. In addition, the MFCC features are used for individual tracks. From results we found that without the dimensionality reduction both k-nearest neighbor and Support Vector Machine (SVM) gave more accurate results compare to the results with dimensionality reduction. Overall the Support Vector Machine (SVM) is much more effective classifier for classification of music genre. It gave an overall accuracy of 77%. Keywords—K-nearest neighbor (k-NN); Support Vector Machine (SVM); music; genre; classification; features; Mel Frequency Cepstral Coefficients (MFCC); principal component analysis (PCA)",
"title": ""
},
{
"docid": "4250ae1e0b2c662b98171acaeaa35028",
"text": "For many applications in Urban Search and Rescue (USAR) scenarios robots need to learn a map of unknown environments. We present a system for fast online learning of occupancy grid maps requiring low computational resources. It combines a robust scan matching approach using a LIDAR system with a 3D attitude estimation system based on inertial sensing. By using a fast approximation of map gradients and a multi-resolution grid, reliable localization and mapping capabilities in a variety of challenging environments are realized. Multiple datasets showing the applicability in an embedded hand-held mapping system are provided. We show that the system is sufficiently accurate as to not require explicit loop closing techniques in the considered scenarios. The software is available as an open source package for ROS.",
"title": ""
},
{
"docid": "8f444ac95ff664e06e1194dd096e4f31",
"text": "Entity alignment aims to link entities and their counterparts among multiple knowledge graphs (KGs). Most existing methods typically rely on external information of entities such as Wikipedia links and require costly manual feature construction to complete alignment. In this paper, we present a novel approach for entity alignment via joint knowledge embeddings. Our method jointly encodes both entities and relations of various KGs into a unified low-dimensional semantic space according to a small seed set of aligned entities. During this process, we can align entities according to their semantic distance in this joint semantic space. More specifically, we present an iterative and parameter sharing method to improve alignment performance. Experiment results on realworld datasets show that, as compared to baselines, our method achieves significant improvements on entity alignment, and can further improve knowledge graph completion performance on various KGs with the favor of joint knowledge embeddings.",
"title": ""
},
{
"docid": "70781625aa7e95af8fc9e092f0b2c469",
"text": "Software Defined Networking (SDN) provides opportunities for network verification and debugging by offering centralized visibility of the data plane. This has enabled both offline and online data-plane verification. However, little work has gone into the verification of time-varying properties (e.g., dynamic access control), where verification conditions change dynamically in response to application logic, network events, and external stimulus (e.g., operator requests).\n This paper introduces an assertion language to support verifying and debugging SDN applications with dynamically changing verification conditions. The language allows programmers to annotate controller applications with C-style assertions about the data plane. Assertions consist of regular expressions on paths to describe path properties for classes of packets, and universal and existential quantifiers that range over programmer-defined sets of hosts, switches, or other network entities. As controller programs dynamically add and remove elements from these sets, they generate new verification conditions that the existing data plane must satisfy. This work proposes an incremental data structure together with an underlying verification engine, to avoid naively re-verifying the entire data plane as these verification conditions change. To validate our ideas, we have implemented a debugging library on top of a modified version of VeriFlow, which is easily integrated into existing controller systems with minimal changes. Using this library, we have verified correctness properties for applications on several controller platforms.",
"title": ""
},
{
"docid": "f8724f8166eeb48461f9f4ac8fdd87d3",
"text": "The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of crossspectral approaches is to take advantage of the strengths of each spectral band providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state-of-art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different crossspectral domains.",
"title": ""
},
{
"docid": "c021904cff1cbef8ab62cc3fe0502a7e",
"text": "Light-emitting diodes (LEDs), which will be increasingly used in lighting technology, will also allow for distribution of broadband optical wireless signals. Visible-light communication (VLC) using white LEDs offers several advantages over the RF-based wireless systems, i.e., license-free spectrum, low power consumption, and higher privacy. Mostly, optical wireless can provide much higher data rates. In this paper, we demonstrate a VLC system based on a white LED for indoor broadband wireless access. After investigating the nonlinear effects of the LED and the power amplifier, a data rate of 1 Gb/s has been achieved at the standard illuminance level, by using an optimized discrete multitone modulation technique and adaptive bit- and power-loading algorithms. The bit-error ratio of the received data was $1.5\\cdot 10^{-3}$, which is within the limit of common forward error correction (FEC) coding. These results twice the highest capacity that had been previously obtained.",
"title": ""
},
{
"docid": "772fc1cf2dd2837227facd31f897dba3",
"text": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-α (transentorhinal stages I–II). The two forms of limbic stages (stages III–IV) were marked by a conspicuous affection of layer Pre-α in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V–VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations.",
"title": ""
},
{
"docid": "deccc92276cca4d064b0161fd8ee7dd9",
"text": "Vast amount of information is available on web. Data analysis applications such as extracting mutual funds information from a website, daily extracting opening and closing price of stock from a web page involves web data extraction. Huge efforts are made by lots of researchers to automate the process of web data scraping. Lots of techniques depends on the structure of web page i.e. html structure or DOM tree structure to scrap data from web page. In this paper we are presenting survey of HTML aware web scrapping techniques. Keywords— DOM Tree, HTML structure, semi structured web pages, web scrapping and Web data extraction.",
"title": ""
},
{
"docid": "055a7be9623e794168b858e41bceaabd",
"text": "Lexical Pragmatics is a research field that tries to give a systematic and explanatory account of pragmatic phenomena that are connected with the semantic underspecification of lexical items. Cases in point are the pragmatics of adjectives, systematic polysemy, the distribution of lexical and productive causatives, blocking phenomena, the interpretation of compounds, and many phenomena presently discussed within the framework of Cognitive Semantics. The approach combines a constrained-based semantics with a general mechanism of conversational implicature. The basic pragmatic mechanism rests on conditions of updating the common ground and allows to give a precise explication of notions as generalized conversational implicature and pragmatic anomaly. The fruitfulness of the basic account is established by its application to a variety of recalcitrant phenomena among which its precise treatment of Atlas & Levinson's Qand I-principles and the formalization of the balance between informativeness and efficiency in natural language processing (Horn's division of pragmatic labor) deserve particular mention. The basic mechanism is subsequently extended by an abductive reasoning system which is guided by subjective probability. The extended mechanism turned out to be capable of giving a principled account of lexical blocking, the pragmatics of adjectives, and systematic polysemy.",
"title": ""
},
{
"docid": "4f355aa038e56b9449181eb780e05484",
"text": "Composite indices or pooled indices are useful tools for the evaluation of disease activity in patients with rheumatoid arthritis (RA). They allow the integration of various aspects of the disease into a single numerical value, and may therefore facilitate consistent patient care and improve patient compliance, which both can lead to improved outcomes. The Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI) are two new tools for the evaluation of disease activity in RA. They have been developed to provide physicians and patients with simple and more comprehensible instruments. Moreover, the CDAI is the only composite index that does not incorporate an acute phase response and can therefore be used to conduct a disease activity evaluation essentially anytime and anywhere. These two new tools have not been developed to replace currently available instruments such as the DAS28, but rather to provide options for different environments. The comparative construct, content, and discriminant validity of all three indices--the DAS28, the SDAI, and the CDAI--allow physicians to base their choice of instrument on their infrastructure and their needs, and all of them can also be used in clinical trials.",
"title": ""
},
{
"docid": "ef09bc08cc8e94275e652e818a0af97f",
"text": "The biosynthetic pathway of L-tartaric acid, the form most commonly encountered in nature, and its catabolic ties to vitamin C, remain a challenge to plant scientists. Vitamin C and L-tartaric acid are plant-derived metabolites with intrinsic human value. In contrast to most fruits during development, grapes accumulate L-tartaric acid, which remains within the berry throughout ripening. Berry taste and the organoleptic properties and aging potential of wines are intimately linked to levels of L-tartaric acid present in the fruit, and those added during vinification. Elucidation of the reactions relating L-tartaric acid to vitamin C catabolism in the Vitaceae showed that they proceed via the oxidation of L-idonic acid, the proposed rate-limiting step in the pathway. Here we report the use of transcript and metabolite profiling to identify candidate cDNAs from genes expressed at developmental times and in tissues appropriate for L-tartaric acid biosynthesis in grape berries. Enzymological analyses of one candidate confirmed its activity in the proposed rate-limiting step of the direct pathway from vitamin C to tartaric acid in higher plants. Surveying organic acid content in Vitis and related genera, we have identified a non-tartrate-forming species in which this gene is deleted. This species accumulates in excess of three times the levels of vitamin C than comparably ripe berries of tartrate-accumulating species, suggesting that modulation of tartaric acid biosynthesis may provide a rational basis for the production of grapes rich in vitamin C.",
"title": ""
},
{
"docid": "be8e1e4fd9b8ddb0fc7e1364455999e8",
"text": "In this paper, we describe the development and exploitation of a corpus-based tool for the identification of metaphorical patterns in large datasets. The analysis of metaphor as a cognitive and cultural, rather than solely linguistic, phenomenon has become central as metaphor researchers working within ‘Cognitive Metaphor Theory’ have drawn attention to the presence of systematic and pervasive conventional metaphorical patterns in ‘ordinary’ language (e.g. I’m at a crossroads in my life). Cognitive Metaphor Theory suggests that these linguistic patterns reflect the existence of conventional conceptual metaphors, namely systematic cross-domain correspondences in conceptual structure (e.g. LIFE IS A JOURNEY). This theoretical approach, described further in section 2, has led to considerable advances in our understanding of metaphor both as a linguistic device and a cognitive model, and to our awareness of its role in many different genres and discourses. Although some recent research has incorporated corpus linguistic techniques into this framework for the analysis of metaphor, to date, such analyses have primarily involved the concordancing of pre-selected search strings (e.g. Deignan 2005). The method described in this paper represents an attempt to extend the limits of this form of analysis. In our approach, we have applied an existing semantic field annotation tool (USAS) developed at Lancaster University to aid metaphor researchers in searching corpora. We are able to filter all possible candidate semantic fields proposed by USAS to assist in finding possible ‘source’ (e.g. JOURNEY) and ‘target’ (e.g. LIFE) domains, and we can then go on to consider the potential metaphoricity of the expressions included under each possible source domain. This method thus enables us to identify open-ended sets of metaphorical expressions, which are not limited to predetermined search strings. In section 3, we present this emerging methodology for the computer-assisted analysis of metaphorical patterns in discourse. The semantic fields automatically annotated by USAS can be seen as roughly corresponding to the domains of metaphor theory. We have used USAS in combination with key word and domain techniques in Wmatrix (Rayson, 2003) to replicate earlier manual analyses, e.g. machine metaphors in Ken Kesey’s One Flew Over the Cuckoo’s Nest (Semino and Swindlehurst, 1996) and war, machine and organism metaphors in business magazines (Koller, 2004a). These studies are described in section 4.",
"title": ""
},
{
"docid": "e10319d1eb6dd93fe0d98b6d3303efe9",
"text": "This paper presents a novel fast optical flow estimation algorithm and its application to real-time obstacle avoidance of a guide-dog robot. The function of the laboratory-developed robot is to help blind or visually impaired pedestrians to move safely among obstacles. The proposed algorithm features a combination of the conventional correlation-based principle and the differential-based method for optical flow estimation. Employing image intensity gradients as features for pattern matching, we set up a brightness constraint to configure the search area. The merit of this scheme is that the computation load can be greatly reduced and in the mean time the possibility of estimation error is decreased. The vision system has been established on board the robot to provide depth information of the immediate environment. The depth data are transformed to a safety distribution histogram and used for real-time obstacle avoidance. Experimental results demonstrate that the proposed method is effective for a guidance robot in a dynamic environment.",
"title": ""
},
{
"docid": "3d2e170b4cd31d0e1a28c968f0b75cf6",
"text": "Fog Computing is a new variety of the cloud computing paradigm that brings virtualized cloud services to the edge of the network to control the devices in the IoT. We present a pattern for fog computing which describes its architecture, including its computing, storage and networking services. Fog computing is implemented as an intermediate platform between end devices and cloud computing data centers. The recent popularity of the Internet of Things (IoT) has made fog computing a necessity to handle a variety of devices. It has been recognized as an important platform to provide efficient, location aware, close to the edge, cloud services. Our model includes most of the functionality found in current fog architectures.",
"title": ""
},
{
"docid": "2ee5e5ecd9304066b12771f3349155f8",
"text": "An intelligent wiper speed adjustment system can be found in most middle and upper class cars. A core piece of this gadget is the rain sensor on the windshield. With the upcoming number of cars being equipped with an in-vehicle camera for vision-based applications the call for integrating all sensors in the area of the rearview mirror into one device rises to reduce the number of parts and variants. In this paper, functionality of standard rain sensors and different vision-based approaches are explained and a novel rain sensing concept based on an automotive in-vehicle camera for Driver Assistance Systems (DAS) is developed to enhance applicability. Hereby, the region at the bottom of the field of view (FOV) of the imager is used to detect raindrops, while the upper part of the image is still usable for other vision-based applications. A simple algorithm is set up to keep the additional processing time low and to quantitatively gather the rain intensity. Mechanisms to avoid false activations of the wipers are introduced. First experimental experiences based on real scenarios show promising results.",
"title": ""
}
] |
scidocsrr
|
838ee5257f993f5488cf7c0c65ebeb2c
|
Measuring User Credibility in Social Media
|
[
{
"docid": "51d950dfb9f71b9c8948198c147b9884",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
}
] |
[
{
"docid": "4a741431c708cd92a250bcb91e4f1638",
"text": "PURPOSE\nIn today's workplace, nurses are highly skilled professionals possessing expertise in both information technology and nursing. Nursing informatics competencies are recognized as an important capability of nurses. No established guidelines existed for nurses in Asia. This study focused on identifying the nursing informatics competencies required of nurses in Taiwan.\n\n\nMETHODS\nA modified Web-based Delphi method was used for two expert groups in nursing, educators and administrators. Experts responded to 323 items on the Nursing Informatics Competencies Questionnaire, modified from the initial work of Staggers, Gassert and Curran to include 45 additional items. Three Web-based Delphi rounds were conducted. Analysis included detailed item analysis. Competencies that met 60% or greater agreement of item importance and appropriate level of nursing practice were included.\n\n\nRESULTS\nN=32 experts agreed to participate in Round 1, 23 nursing educators and 9 administrators. The participation rates for Rounds 2 and 3=68.8%. By Round 3, 318 of 323 nursing informatics competencies achieved required consensus levels. Of the new competencies, 42 of 45 were validated. A high degree of agreement existed for specific nursing informatics competencies required for nurses in Taiwan (97.8%).\n\n\nCONCLUSIONS\nThis study provides a current master list of nursing informatics competency requirements for nurses at four levels in the U.S. and Taiwan. The results are very similar to the original work of Staggers et al. The results have international relevance because of the global importance of information technology for the nursing profession.",
"title": ""
},
{
"docid": "a7d4881412978a41da17e282f9419bdd",
"text": "Recent studies suggest that judgments of facial masculinity reflect more than sexually dimorphic shape. Here, we investigated whether the perception of masculinity is influenced by facial cues to body height and weight. We used the average differences in three-dimensional face shape of forty men and forty women to compute a morphological masculinity score, and derived analogous measures for facial correlates of height and weight based on the average face shape of short and tall, and light and heavy men. We found that facial cues to body height and weight had substantial and independent effects on the perception of masculinity. Our findings suggest that men are perceived as more masculine if they appear taller and heavier, independent of how much their face shape differs from women's. We describe a simple method to quantify how body traits are reflected in the face and to define the physical basis of psychological attributions.",
"title": ""
},
{
"docid": "0ac0f9965376f5547a2dabd3d06b6b96",
"text": "A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present an approach to generating such summaries, a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of this method with summaries generated by humans, showing that we obtain significantly higher agreement than do earlier methods.",
"title": ""
},
{
"docid": "dbb087a999a784669d2189e1c9cd92c4",
"text": "Home Automation industry is growing rapidly; this is fuelled by provide supporting systems for the elderly and the disabled, especially those who live alone. Coupled with this, the world population is confirmed to be getting older. Home automation systems must comply with the household standards and convenience of usage. This paper details the overall design of a wireless home automation system (WHAS) which has been built and implemented. The automation centers on recognition of voice commands and uses low-power RF ZigBee wireless communication modules which are relatively cheap. The home automation system is intended to control all lights and electrical appliances in a home or office using voice commands. The system has been tested and verified. The verification tests included voice recognition response test, indoor ZigBee communication test. The tests involved a mix of 10 male and female subjects with different Indian languages. 7 different voice commands were sent by each person. Thus the test involved sending a total of 70 commands and 80.05% of these commands were recognized correctly. Keywords— Home automation, ZigBee transceivers, voice streaming, HM 2007, voice recognition. —————————— ——————————",
"title": ""
},
{
"docid": "a66a7210436752b220dc5483c43b03be",
"text": "Automated unit tests are an essential software quality assurance measure that is widely used in practice. In many projects, thus, large volumes of test code have co-evolved with the production code throughout development. Like any other code, test code too may contain faults, affecting the effectiveness, reliability and usefulness of the tests. Furthermore, throughout the software system's ongoing development and maintenance phase, the test code too has to be constantly adapted and maintained. To support detecting problems in test code and improving its quality, we implemented 42 static checks for analyzing JUnit tests. These checks encompass best practices for writing unit tests, common issues observed in using xUnit frameworks, and our experiences collected from several years of providing trainings and reviews of test code for industry and in teaching. The checks can be run using the open source analysis tool PMD. In addition to a description of the implemented checks and their rationale, we demonstrate the applicability of using static analysis for test code by analyzing the unit tests of the open source project JFreeChart.",
"title": ""
},
{
"docid": "a81f7c588440797b96d342dcad59aed0",
"text": "Radio-frequency identification (RFID) technology has recently attracted significant interest in the realm of body-area applications, including both wearables and implants. The presence of the human body in close proximity to the RFID device creates several challenges in terms of design, fabrication, and testing, while also ushering in a whole new realm of opportunities for health care and other body-area applications. With these factors in mind, this article provides a holistic and critical review of design challenges associated with body-area RFID technologies, including operation frequencies, influence of the surrounding biological tissues, antenna design and miniaturization, and conformance to international safety guidelines. Concurrently, a number of fabrication methods are discussed for realizing flexible, conformal, and robust RFID device prototypes. The article concludes by reviewing transformative RFID-based solutions for wearable and implantable applications and discussing the future opportunities and challenges raised. Notably, this is the first time that a comprehensive review has been presented in the area of RFID antennas for body-area applications, addressing challenges specific to on-/in-body RFID operation and spanning a wide range of aspects that include design, fabrication, testing, and, eventually, applications and future directions. As such, the utmost aim of this article is to be a unique point of reference for experts and nonexperts in the field.",
"title": ""
},
{
"docid": "066fdb2deeca1d13218f16ad35fe5f86",
"text": "As manga (Japanese comics) have become common content in many countries, it is necessary to search manga by text query or translate them automatically. For these applications, we must first extract texts from manga. In this paper, we develop a method to detect text regions in manga. Taking motivation from methods used in scene text detection, we propose an approach using classifiers for both connected components and regions. We have also developed a text region dataset of manga, which enables learning and detailed evaluations of methods used to detect text regions. Experiments using the dataset showed that our text detection method performs more effectively than existing methods.",
"title": ""
},
{
"docid": "4d31eda0840ac80874a14b0a9fc2439f",
"text": "We identified a patient who excreted large amounts of methylmalonic acid and malonic acid. In contrast to other patients who have been described with combined methylmalonic and malonic aciduria, our patient excreted much larger amounts of methylmalonic acid than malonic acid. Since most previous patients with this biochemical phenotype have been reported to have deficiency of malonyl-CoA decarboxylase, we assayed malonyl-CoA decarboxylase activity in skin fibroblasts derived from our patient and found the enzyme activity to be normal. We examined four isocaloric (2000 kcal/day) dietary regimes administered serially over a period of 12 days with 3 days devoted to each dietary regimen. These diets were high in carbohydrate, fat or protein, or enriched with medium-chain triglycerides. Diet-induced changes in malonic and methylmalonic acid excretion became evident 24–36 h after initiating a new diet. Total excretion of malonic and methylmalonic acid was greater (p<0.01) during a high-protein diet than during a high-carbohydrate or high-fat diet. A high-carbohydrate, low-protein diet was associated with the lowest levels of malonic and methylmalonic acid excretion. Perturbations in these metabolites were most marked at night. On all dietary regimes, our patient excreted 3–10 times more methylmalonic acid than malonic acid, a reversal of the ratios reported in patients with malonyl-CoA decarboxylase deficiency. Our data support a previous observation that combined malonicand methylmalonic aciduria has aetiologies other than malonyl-CoA decar-boxylase deficiency. The malonic acid to methylmalonic acid ratio in response to dietary intervention may be useful in identifying a subgroup of patients with normal enzyme activity.",
"title": ""
},
{
"docid": "d7a1985750fe10273c27f7f8121640ac",
"text": "The large volumes of data that will be produced by ubiquitous sensors and meters in future smart distribution networks represent an opportunity for the use of data analytics to extract valuable knowledge and, thus, improve Distribution Network Operator (DNO) planning and operation tasks. Indeed, applications ranging from outage management to detection of non-technical losses to asset management can potentially benefit from data analytics. However, despite all the benefits, each application presents DNOs with diverse data requirements and the need to define an adequate approach. Consequently, it is critical to understand the different interactions among applications, monitoring infrastructure and approaches involved in the use of data analytics in distribution networks. To assist DNOs in the decision making process, this work presents some of the potential applications where data analytics are likely to improve distribution network performance and the corresponding challenges involved in its implementation.",
"title": ""
},
{
"docid": "ae5fac207e5d3bf51bffbf2ec01fd976",
"text": "Deep learning has revolutionized the way sensor data are analyzed and interpreted. The accuracy gains these approaches offer make them attractive for the next generation of mobile, wearable and embedded sensory applications. However, state-of-the-art deep learning algorithms typically require a significant amount of device and processor resources, even just for the inference stages that are used to discriminate high-level classes from low-level data. The limited availability of memory, computation, and energy on mobile and embedded platforms thus pose a significant challenge to the adoption of these powerful learning techniques. In this paper, we propose SparseSep, a new approach that leverages the sparsification of fully connected layers and separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms. As a result, SparseSep allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy. We experiment using SparseSep across a variety of common processors such as the Qualcomm Snapdragon 400, ARM Cortex M0 and M3, and Nvidia Tegra K1, and show that it allows inference for various deep models to execute more efficiently; for example, on average requiring 11.3 times less memory and running 13.3 times faster on these representative platforms.",
"title": ""
},
{
"docid": "6e1013e84468c3809742bbe826598f21",
"text": "Many-light rendering methods replace multi-bounce light transport with direct lighting from many virtual point light sources to allow for simple and efficient computation of global illumination. Lightcuts build a hierarchy over virtual lights, so that surface points can be shaded with a sublinear number of lights while minimizing error. However, the original algorithm needs to run on every shading point of the rendered image. It is well known that the performance of Lightcuts can be improved by exploiting the coherence between individual cuts. We propose a novel approach where we invest into the initial lightcut creation at representative cache records, and then directly interpolate the input lightcuts themselves as well as per-cluster visibility for neighboring shading points. This allows us to improve upon the performance of the original Lightcuts algorithm by a factor of 4−8 compared to an optimized GPU-implementation of Lightcuts, while introducing only a small additional approximation error. The GPU-implementation of our technique enables us to create previews of Lightcuts-based global illumination renderings.",
"title": ""
},
{
"docid": "9b519ba8a3b32d7b5b8a117b2d4d06ca",
"text": "This article reviews the most current practice guidelines in the diagnosis and management of patients born with cleft lip and/or palate. Such patients frequently have multiple medical and social issues that benefit greatly from a team approach. Common challenges include feeding difficulty, nutritional deficiency, speech disorders, hearing problems, ear disease, dental anomalies, and both social and developmental delays, among others. Interdisciplinary evaluation and collaboration throughout a patient's development are essential.",
"title": ""
},
{
"docid": "e7e60cc10b156e67bce5c07866c40bc3",
"text": "JavaScript-based malware attacks have increased in recent years and currently represent a signicant threat to the use of desktop computers, smartphones, and tablets. While static and runtime methods for malware detection have been proposed in the literature, both on the client side, for just-in-time in-browser detection, as well as offline, crawler-based malware discovery, these approaches encounter the same fundamental limitation. Web-based malware tends to be environment-specific, targeting a particular browser, often attacking specic versions of installed plugins. This targeting occurs because the malware exploits vulnerabilities in specific plugins and fails otherwise. As a result, a fundamental limitation for detecting a piece of malware is that malware is triggered infrequently, only showing itself when the right environment is present. We observe that, using fingerprinting techniques that capture and exploit unique properties of browser configurations, almost all existing malware can be made virtually impssible for malware scanners to detect. This paper proposes Rozzle, a JavaScript multi-execution virtual machine, as a way to explore multiple execution paths within a single execution so that environment-specific malware will reveal itself. Using large-scale experiments, we show that Rozzle increases the detection rate for offline runtime detection by almost seven times. In addition, Rozzle triples the effectiveness of online runtime detection. We show that Rozzle incurs virtually no runtime overhead and allows us to replace multiple VMs running different browser configurations with a single Rozzle-enabled browser, reducing the hardware requirements, network bandwidth, and power consumption.",
"title": ""
},
{
"docid": "23c71e8893fceed8c13bf2fc64452bc2",
"text": "Variable stiffness actuators (VSAs) are complex mechatronic devices that are developed to build passively compliant, robust, and dexterous robots. Numerous different hardware designs have been developed in the past two decades to address various demands on their functionality. This review paper gives a guide to the design process from the analysis of the desired tasks identifying the relevant attributes and their influence on the selection of different components such as motors, sensors, and springs. The influence on the performance of different principles to generate the passive compliance and the variation of the stiffness are investigated. Furthermore, the design contradictions during the engineering process are explained in order to find the best suiting solution for the given purpose. With this in mind, the topics of output power, potential energy capacity, stiffness range, efficiency, and accuracy are discussed. Finally, the dependencies of control, models, sensor setup, and sensor quality are addressed.",
"title": ""
},
{
"docid": "046df1ccbc545db05d0d91fe8f73d64a",
"text": "Precise models of the robot inverse dynamics allow the design of significantly more accurate, energy-efficient and more compliant robot control. However, in some cases the accuracy of rigidbody models does not suffice for sound control performance due to unmodeled nonlinearities arising from hydraulic cable dynamics, complex friction or actuator dynamics. In such cases, estimating the inverse dynamics model from measured data poses an interesting alternative. Nonparametric regression methods, such as Gaussian process regression (GPR) or locally weighted projection regression (LWPR), are not as restrictive as parametric models and, thus, offer a more flexible framework for approximating unknown nonlinearities. In this paper, we propose a local approximation to the standard GPR, called local GPR (LGP), for real-time model online-learning by combining the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of LWPR. The approach is shown to have competitive learning performance for high-dimensional data while being sufficiently fast for real-time learning. The effectiveness of LGP is exhibited by a comparison with the state-of-the-art regression techniques, such as GPR, LWPR and ν-SVR. The applicability of the proposed LGP method is demonstrated by real-time online-learning of the inverse dynamics model for robot model-based control on a Barrett WAM robot arm.",
"title": ""
},
{
"docid": "2aa7f98e302bf2e96e16645cd70ff74e",
"text": "Membrane potential and permselectivity are critical parameters for a variety of electrochemically-driven separation and energy technologies. An electric potential is developed when a membrane separates electrolyte solutions of different concentrations, and a permselective membrane allows specific species to be transported while restricting the passage of other species. Ion exchange membranes are commonly used in applications that require advanced ionic electrolytes and span technologies such as alkaline batteries to ammonium bicarbonate reverse electrodialysis, but membranes are often only characterized in sodium chloride solutions. Our goal in this work was to better understand membrane behaviour in aqueous ammonium bicarbonate, which is of interest for closed-loop energy generation processes. Here we characterized the permselectivity of four commercial ion exchange membranes in aqueous solutions of sodium chloride, ammonium chloride, sodium bicarbonate, and ammonium bicarbonate. This stepwise approach, using four different ions in aqueous solution, was used to better understand how these specific ions affect ion transport in ion exchange membranes. Characterization of cation and anion exchange membrane permselectivity, using these ions, is discussed from the perspective of the difference in the physical chemistry of the hydrated ions, along with an accompanying re-derivation and examination of the basic equations that describe membrane potential. In general, permselectivity was highest in sodium chloride and lowest in ammonium bicarbonate solutions, and the nature of both the counter- and co-ions appeared to influence measured permselectivity. The counter-ion type influences the binding affinity between counter-ions and polymer fixed charge groups, and higher binding affinity between fixed charge sites and counter-ions within the membrane decreases the effective membrane charge density. As a result permselectivity decreases. The charge density and polarizability of the co-ions also appeared to influence permselectivity leading to ion-specific effects; co-ions that are charge dense and have low polarizability tended to result in high membrane permselectivity.",
"title": ""
},
{
"docid": "0742dcc602a216e41d3bfe47bffc7d30",
"text": "In this paper we study supervised and semi-supervised classification of e-mails. We consider two tasks: filing e-mails into folders and spam e-mail filtering. Firstly, in a supervised learning setting, we investigate the use of random forest for automatic e-mail filing into folders and spam e-mail filtering. We show that random forest is a good choice for these tasks as it runs fast on large and high dimensional databases, is easy to tune and is highly accurate, outperforming popular algorithms such as decision trees, support vector machines and naïve Bayes. We introduce a new accurate feature selector with linear time complexity. Secondly, we examine the applicability of the semi-supervised co-training paradigm for spam e-mail filtering by employing random forests, support vector machines, decision tree and naïve Bayes as base classifiers. The study shows that a classifier trained on a small set of labelled examples can be successfully boosted using unlabelled examples to accuracy rate of only 5% lower than a classifier trained on all labelled examples. We investigate the performance of co-training with one natural feature split and show that in the domain of spam e-mail filtering it can be as competitive as co-training with two natural feature splits.",
"title": ""
},
{
"docid": "9cdcf6718ace17a768f286c74c0eb11c",
"text": "Trapa bispinosa Roxb. which belongs to the family Trapaceae is a small herb well known for its medicinal properties and is widely used worldwide. Trapa bispinosa or Trapa natans is an important plant of Indian Ayurvedic system of medicine which is used in the problems of stomach, genitourinary system, liver, kidney, and spleen. It is bitter, astringent, stomachic, diuretic, febrifuge, and antiseptic. The whole plant is used in gonorrhea, menorrhagia, and other genital affections. It is useful in diarrhea, dysentery, ophthalmopathy, ulcers, and wounds. These are used in the validated conditions in pitta, burning sensation, dipsia, dyspepsia, hemorrhage, hemoptysis, diarrhea, dysentery, strangely, intermittent fever, leprosy, fatigue, inflammation, urethrorrhea, fractures, erysipelas, lumbago, pharyngitis, bronchitis and general debility, and suppressing stomach and heart burning. Maybe it is due to photochemical content of Trapa bispinosa having high quantity of minerals, ions, namely, Ca, K, Na, Zn, and vitamins; saponins, phenols, alkaloids, H-donation, flavonoids are reported in the plants. Nutritional and biochemical analyses of fruits of Trapa bispinosa in 100 g showed 22.30 and 71.55% carbohydrate, protein contents were 4.40% and 10.80%, a percentage of moisture, fiber, ash, and fat contents were 70.35 and 7.30, 2.05 and 6.35, 2.30 and 8.50, and 0.65 and 1.85, mineral contents of the seeds were 32 mg and 102.85 mg calcium, 1.4 and 3.8 mg Iron, and 121 and 325 mg phosphorus in 100 g, and seeds of Trapa bispinosa produced 115.52 and 354.85 Kcal of energy, in fresh and dry fruits, respectively. Chemical analysis of the fruit and fresh nuts having considerable water content citric acid and fresh fruit which substantiates its importance as dietary food also reported low crude lipid, and major mineral present with confirming good amount of minerals as an iron and manganese potassium were contained in the fruit. Crude fiber, total protein content of the water chestnut kernel, Trapa bispinosa are reported. In this paper, the recent reports on nutritional, phytochemical, and pharmacological aspects of Trapa bispinosa Roxb, as a medicinal and nutritional food, are reviewed.",
"title": ""
},
{
"docid": "599c2f4205f3a0978d0567658daf8be6",
"text": "With increasing audio/video service consumption through unmanaged IP networks, HTTP adaptive streaming techniques have emerged to handle bandwidth limitations and variations. But while it is becoming common to serve multiple clients in one home network, these solutions do not adequately address fine tuned quality arbitration between the multiple streams. While clients compete for bandwidth, the video suffers unstable conditions and/or inappropriate bit-rate levels.\n We hereby experiment a mechanism based on traffic chapping that allow bandwidth arbitration to be implemented in the home gateway, first determining desirable target bit-rates to be reached by each stream and then constraining the clients to stay within their limits. This enables the delivery of optimal quality of experience to the maximum number of users. This approach is validated through experimentation, and results are shown through a set of objective measurement criteria.",
"title": ""
},
{
"docid": "225204d66c371372debb3bb2a37c795b",
"text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.",
"title": ""
}
] |
scidocsrr
|
09e694301e741dd9dbe591b981dec8cb
|
Capturing Business Model Innovation Driven by the Emergence of New Technologies in Established Firms
|
[
{
"docid": "c936e76e8db97b640a4123e66169d1b8",
"text": "Varying philosophical and theoretical orientations to qualitative inquiry remind us that issues of quality and credibility intersect with audience and intended research purposes. This overview examines ways of enhancing the quality and credibility of qualitative analysis by dealing with three distinct but related inquiry concerns: rigorous techniques and methods for gathering and analyzing qualitative data, including attention to validity, reliability, and triangulation; the credibility, competence, and perceived trustworthiness of the qualitative researcher; and the philosophical beliefs of evaluation users about such paradigm-based preferences as objectivity versus subjectivity, truth versus perspective, and generalizations versus extrapolations. Although this overview examines some general approaches to issues of credibility and data quality in qualitative analysis, it is important to acknowledge that particular philosophical underpinnings, specific paradigms, and special purposes for qualitative inquiry will typically include additional or substitute criteria for assuring and judging quality, validity, and credibility. Moreover, the context for these considerations has evolved. In early literature on evaluation methods the debate between qualitative and quantitative methodologists was often strident. In recent years the debate has softened. A consensus has gradually emerged that the important challenge is to match appropriately the methods to empirical questions and issues, and not to universally advocate any single methodological approach for all problems.",
"title": ""
}
] |
[
{
"docid": "ce7fdc16d6d909a4e0c3294ed55af51d",
"text": "In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. We show that, without any language model, Seq2Seq and RNN-Transducer models both outperform the best reported CTC models with a language model, on the popular Hub5'00 benchmark. On our internal diverse dataset, these trends continue — RNN-Transducer models rescored with a language model after beam search outperform our best CTC models. These results simplify the speech recognition pipeline so that decoding can now be expressed purely as neural network operations. We also study how the choice of encoder architecture affects the performance of the three models — when all encoder layers are forward only, and when encoders downsample the input representation aggressively.",
"title": ""
},
{
"docid": "e946deae6e1d441c152dca6e52268258",
"text": "The design of robust and high-performance gaze-tracking systems is one of the most important objectives of the eye-tracking community. In general, a subject calibration procedure is needed to learn system parameters and be able to estimate the gaze direction accurately. In this paper, we attempt to determine if subject calibration can be eliminated. A geometric analysis of a gaze-tracking system is conducted to determine user calibration requirements. The eye model used considers the offset between optical and visual axes, the refraction of the cornea, and Donder's law. This paper demonstrates the minimal number of cameras, light sources, and user calibration points needed to solve for gaze estimation. The underlying geometric model is based on glint positions and pupil ellipse in the image, and the minimal hardware needed for this model is one camera and multiple light-emitting diodes. This paper proves that subject calibration is compulsory for correct gaze estimation and proposes a model based on a single point for subject calibration. The experiments carried out show that, although two glints and one calibration point are sufficient to perform gaze estimation (error ~ 1deg), using more light sources and calibration points can result in lower average errors.",
"title": ""
},
{
"docid": "f268718ceac79dbf8d0dcda2ea6557ca",
"text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.06.003 ⇑ Corresponding author. E-mail addresses: fred.qi@ieee.org (F. Qi), gmshi@x 1 Principal corresponding author. Depth acquisition becomes inexpensive after the revolutionary invention of Kinect. For computer vision applications, depth maps captured by Kinect require additional processing to fill up missing parts. However, conventional inpainting methods for color images cannot be applied directly to depth maps as there are not enough cues to make accurate inference about scene structures. In this paper, we propose a novel fusion based inpainting method to improve depth maps. The proposed fusion strategy integrates conventional inpainting with the recently developed non-local filtering scheme. The good balance between depth and color information guarantees an accurate inpainting result. Experimental results show the mean absolute error of the proposed method is about 20 mm, which is comparable to the precision of the Kinect sensor. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0ab1607237e9fd804a23745113e133ef",
"text": "One of the key tasks of sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. In this work, we focus on using supervised sequence labeling as the base approach to performing the task. Although several extraction methods using sequence labeling methods such as Conditional Random Fields (CRF) and Hidden Markov Models (HMM) have been proposed, we show that this supervised approach can be significantly improved by exploiting the idea of concept sharing across multiple domains. For example, “screen” is an aspect in iPhone, but not only iPhone has a screen, many electronic devices have screens too. When “screen” appears in a review of a new domain (or product), it is likely to be an aspect too. Knowing this information enables us to do much better extraction in the new domain. This paper proposes a novel extraction method exploiting this idea in the context of supervised sequence labeling. Experimental results show that it produces markedly better results than without using the past information.",
"title": ""
},
{
"docid": "b1fabdbfea2fcffc8071371de8399b69",
"text": "Cities across the United States are implementing information communication technologies in an effort to improve government services. One such innovation in e-government is the creation of 311 systems, offering a centralized platform where citizens can request services, report non-emergency concerns, and obtain information about the city via hotline, mobile, or web-based applications. The NYC 311 service request system represents one of the most significant links between citizens and city government, accounting for more than 8,000,000 requests annually. These systems are generating massive amounts of data that, when properly managed, cleaned, and mined, can yield significant insights into the real-time condition of the city. Increasingly, these data are being used to develop predictive models of citizen concerns and problem conditions within the city. However, predictive models trained on these data can suffer from biases in the propensity to make a request that can vary based on socio-economic and demographic characteristics of an area, cultural differences that can affect citizens’ willingness to interact with their government, and differential access to Internet connectivity. Using more than 20,000,000 311 requests together with building violation data from the NYC Department of Buildings and the NYC Department of Housing Preservation and Development; property data from NYC Department of City Planning; and demographic and socioeconomic data from the U.S. Census American Community Survey we develop a two-step methodology to evaluate the propensity to complain: (1) we predict, using a gradient boosting regression model, the likelihood of heating and hot water violations for a given building, and (2) we then compare the actual complaint volume for buildings with predicted violations to quantify discrepancies across the City. Our model predicting service request volumes over time will contribute to the efficiency of the 311 system by informing shortand long-term resource allocation strategy and improving the agency’s performance in responding to requests. For instance, the outcome of our longitudinal pattern analysis allows the city to predict building safety hazards early and take action, leading to anticipatory safety and inspection actions. Furthermore, findings will provide novel insight into equity and community engagement through 311, and provide the basis for acknowledging and accounting for Bloomberg Data for Good Exchange Conference. 24-Sep-2017, Chicago, IL, USA. bias in machine learning applications trained on 311 data.",
"title": ""
},
{
"docid": "44e135418dc6480366bb5679b62bc4f9",
"text": "There is growing interest regarding the role of the right inferior frontal gyrus (RIFG) during a particular form of executive control referred to as response inhibition. However, tasks used to examine neural activity at the point of response inhibition have rarely controlled for the potentially confounding effects of attentional demand. In particular, it is unclear whether the RIFG is specifically involved in inhibitory control, or is involved more generally in the detection of salient or task relevant cues. The current fMRI study sought to clarify the role of the RIFG in executive control by holding the stimulus conditions of one of the most popular response inhibition tasks-the Stop Signal Task-constant, whilst varying the response that was required on reception of the stop signal cue. Our results reveal that the RIFG is recruited when important cues are detected, regardless of whether that detection is followed by the inhibition of a motor response, the generation of a motor response, or no external response at all.",
"title": ""
},
{
"docid": "e2a7ff093714cc6a0543816b3d7c08e9",
"text": "Microblogs such as Twitter reflect the general public’s reactions to major events. Bursty topics from microblogs reveal what events have attracted the most online attention. Although bursty event detection from text streams has been studied before, previous work may not be suitable for microblogs because compared with other text streams such as news articles and scientific publications, microblog posts are particularly diverse and noisy. To find topics that have bursty patterns on microblogs, we propose a topic model that simultaneously captures two observations: (1) posts published around the same time are more likely to have the same topic, and (2) posts published by the same user are more likely to have the same topic. The former helps find eventdriven posts while the latter helps identify and filter out “personal” posts. Our experiments on a large Twitter dataset show that there are more meaningful and unique bursty topics in the top-ranked results returned by our model than an LDA baseline and two degenerate variations of our model. We also show some case studies that demonstrate the importance of considering both the temporal information and users’ personal interests for bursty topic detection from microblogs.",
"title": ""
},
{
"docid": "14e0664fcbc2e29778a1ccf8744f4ca5",
"text": "Mobile offloading migrates heavy computation from mobile devices to cloud servers using one or more communication network channels. Communication interfaces vary in speed, energy consumption and degree of availability. We assume two interfaces: WiFi, which is fast with low energy demand but not always present and cellular, which is slightly slower has higher energy consumption but is present at all times. We study two different communication strategies: one that selects the best available interface for each transmitted packet and the other multiplexes data across available communication channels. Since the latter may experience interrupts in the WiFi connection packets can be delayed. We call it interrupted strategy as opposed to the uninterrupted strategy that transmits packets only over currently available networks. Two key concerns of mobile offloading are the energy use of the mobile terminal and the response time experienced by the user of the mobile device. In this context, we investigate three different metrics that express the energy-performance tradeoff, the known Energy-Response time Weighted Sum (EWRS), the Energy-Response time Product (ERP) and the Energy-Response time Weighted Product (ERWP) metric. We apply the metrics to the two different offloading strategies and find that the conclusions drawn from the analysis depend on the considered metric. In particular, while an additive metric is not normalised, which implies that the term using smaller scale is always favoured, the ERWP metric, which is new in this paper, allows to assign importance to both aspects without being misled by different scales. It combines the advantages of an additive metric and a product. The interrupted strategy can save energy especially if the focus in the tradeoff metric lies on the energy aspect. In general one can say that the uninterrupted strategy is faster, while the interrupted strategy uses less energy. A fast connection improves the response time much more than the fast repair of a failed connection. In conclusion, a short down-time of the transmission channel can mostly be tolerated.",
"title": ""
},
{
"docid": "104c9347338f4e725e3c1907a4991977",
"text": "This paper derives a speech parameter generation algorithm for HMM-based speech synthesis, in which speech parameter sequence is generated from HMMs whose observation vector consists of spectral parameter vector and its dynamic feature vectors. In the algorithm, we assume that the state sequence (state and mixture sequence for the multi-mixture case) or a part of the state sequence is unobservable (i.e., hidden or latent). As a result, the algorithm iterates the forward-backward algorithm and the parameter generation algorithm for the case where state sequence is given. Experimental results show that by using the algorithm, we can reproduce clear formant structure from multi-mixture HMMs as compared with that produced from single-mixture HMMs.",
"title": ""
},
{
"docid": "4d529a33044a1b22a71b4ad2f53f8b65",
"text": "Robotic assistants have the potential to greatly improve our quality of life by supporting us in our daily activities. A service robot acting autonomously in an indoor environment is faced with very complex tasks. Consider the problem of pouring a liquid into a cup, the robot should first determine if the cup is empty or partially filled. RGB-D cameras provide noisy depth measurements which depend on the opaqueness and refraction index of the liquid. In this paper, we present a novel probabilistic approach for estimating the fill-level of a liquid in a cup using an RGB-D camera. Our approach does not make any assumptions about the properties of the liquid like its opaqueness or its refraction index. We develop a probabilistic model using features extracted from RGB and depth data. Our experiments demonstrate the robustness of our method and an improvement over the state of the art.",
"title": ""
},
{
"docid": "d76b7b25bce29cdac24015f8fa8ee5bb",
"text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a <inline-formula> <tex-math notation=\"LaTeX\">$3\\times4$ </tex-math></inline-formula> MNZ unit cell. The measured results indicate that the magnitude of <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}$ </tex-math></inline-formula> is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.",
"title": ""
},
{
"docid": "774c7af1abfde7dd7a4fc858b4b8487e",
"text": "Poorly designed charts are prevalent in reports, magazines, books and on the Web. Most of these charts are only available as bitmap images; without access to the underlying data it is prohibitively difficult for viewers to create more effective visual representations. In response we present ReVision, a system that automatically redesigns visualizations to improve graphical perception. Given a bitmap image of a chart as input, ReVision applies computer vision and machine learning techniques to identify the chart type (e.g., pie chart, bar chart, scatterplot, etc.). It then extracts the graphical marks and infers the underlying data. Using a corpus of images drawn from the web, ReVision achieves image classification accuracy of 96% across ten chart categories. It also accurately extracts marks from 79% of bar charts and 62% of pie charts, and from these charts it successfully extracts data from 71% of bar charts and 64% of pie charts. ReVision then applies perceptually-based design principles to populate an interactive gallery of redesigned charts. With this interface, users can view alternative chart designs and retarget content to different visual styles.",
"title": ""
},
{
"docid": "bf62cf6deb1b11816fa271bfecde1077",
"text": "EASL–EORTC Clinical Practice Guidelines (CPG) on the management of hepatocellular carcinoma (HCC) define the use of surveillance, diagnosis, and therapeutic strategies recommended for patients with this type of cancer. This is the first European joint effort by the European Association for the Study of the Liver (EASL) and the European Organization for Research and Treatment of Cancer (EORTC) to provide common guidelines for the management of hepatocellular carcinoma. These guidelines update the recommendations reported by the EASL panel of experts in HCC published in 2001 [1]. Several clinical and scientific advances have occurred during the past decade and, thus, a modern version of the document is urgently needed. The purpose of this document is to assist physicians, patients, health-care providers, and health-policy makers from Europe and worldwide in the decision-making process according to evidencebased data. Users of these guidelines should be aware that the recommendations are intended to guide clinical practice in circumstances where all possible resources and therapies are available. Thus, they should adapt the recommendations to their local regulations and/or team capacities, infrastructure, and cost– benefit strategies. Finally, this document sets out some recommendations that should be instrumental in advancing the research and knowledge of this disease and ultimately contribute to improve patient care. The EASL–EORTC CPG on the management of hepatocellular carcinoma provide recommendations based on the level of evi-",
"title": ""
},
{
"docid": "e0b7efd5d3bba071ada037fc5b05a622",
"text": "Social exclusion can thwart people's powerful need for social belonging. Whereas prior studies have focused primarily on how social exclusion influences complex and cognitively downstream social outcomes (e.g., memory, overt social judgments and behavior), the current research examined basic, early-in-the-cognitive-stream consequences of exclusion. Across 4 experiments, the threat of exclusion increased selective attention to smiling faces, reflecting an attunement to signs of social acceptance. Compared with nonexcluded participants, participants who experienced the threat of exclusion were faster to identify smiling faces within a \"crowd\" of discrepant faces (Experiment 1), fixated more of their attention on smiling faces in eye-tracking tasks (Experiments 2 and 3), and were slower to disengage their attention from smiling faces in a visual cueing experiment (Experiment 4). These attentional attunements were specific to positive, social targets. Excluded participants did not show heightened attention to faces conveying social disapproval or to positive nonsocial images. The threat of social exclusion motivates people to connect with sources of acceptance, which is manifested not only in \"downstream\" choices and behaviors but also at the level of basic, early-stage perceptual processing.",
"title": ""
},
{
"docid": "7908cc9a1cd6e6f48258a300db37d4a5",
"text": "This report describes the algorithms implemented in a Matlab toolbox for change detection and data segmentation. Functions are provided for simulating changes, choosing design parameters and detecting abrupt changes in signals.",
"title": ""
},
{
"docid": "e3a4b77f05ed29b0643a1d699d747415",
"text": "This letter develops an optical pixel sensor that is based on hydrogenated amorphous silicon thin-film transistors. Exploiting the photo sensitivity of the photo TFTs and combining different color filters, the proposed sensor can sense an optical input signal of a specified color under high ambient illumination conditions. Measurements indicate that the proposed pixel sensor effectively reacts to the optical input signal under light intensities from 873 to 12,910 lux, proving that the sensor is highly reliable under strong ambient illumination.",
"title": ""
},
{
"docid": "a8f391b630a0261a0693c7038370411a",
"text": "In this paper, we address the problem of globally localizing and tracking the pose of a camera-equipped micro aerial vehicle (MAV) flying in urban streets at low altitudes without GPS. An image-based global positioning system is introduced to localize the MAV with respect to the surrounding buildings. We propose a novel airground image-matching algorithm to search the airborne image of the MAV within a ground-level, geotagged image database. Based on the detected matching image features, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral three-dimensional city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne and the ground-level images. The proposed approach is tested on a 2 km trajectory with a small quadrocopter flying in the streets of Zurich. Our vision-based global localization can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual placerecognition approaches. The dataset is made publicly available to the research community. To the best of our knowledge, this is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera. C © 2015 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "18e77bde932964655ba7df73b02a3048",
"text": "In this paper, we propose a mathematical framework to jointly model related activities with both motion and context information for activity recognition and anomaly detection. This is motivated from observations that activities related in space and time rarely occur independently and can serve as context for each other. The spatial and temporal distribution of different activities provides useful cues for the understanding of these activities. We denote the activities occurring with high frequencies in the database as normal activities. Given training data which contains labeled normal activities, our model aims to automatically capture frequent motion and context patterns for each activity class, as well as each pair of classes, from sets of predefined patterns during the learning process. Then, the learned model is used to generate globally optimum labels for activities in the testing videos. We show how to learn the model parameters via an unconstrained convex optimization problem and how to predict the correct labels for a testing instance consisting of multiple activities. The learned model and generated labels are used to detect anomalies whose motion and context patterns deviate from the learned patterns. We show promising results on the VIRAT Ground Dataset that demonstrates the benefit of joint modeling and recognition of activities in a wide-area scene and the effectiveness of the proposed method in anomaly detection.",
"title": ""
},
{
"docid": "eee9bbc4e57981813a45114061ef01ec",
"text": "Although Marx-bank connection of avalanche transistors is widely used in applications requiring high-voltage nanosecond and subnanosecond pulses, the physical mechanisms responsible for the voltage-ramp-initiated switching of a single transistor in the Marx chain remain unclear. It is shown here by detailed comparison of experiments with physical modeling that picosecond switching determined by double avalanche injection in the collector-base diode gives way to formation and shrinkage of the collector field domain typical of avalanche transistors under the second breakdown. The latter regime, characterized by a lower residual voltage, becomes possible despite a short-connected emitter and base, thanks to the 2-D effects.",
"title": ""
},
{
"docid": "424f871e0e2eabf8b1e636f73d0b1c7d",
"text": "Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.",
"title": ""
}
] |
scidocsrr
|
e32ce773290401147c93d3008df65965
|
A Real Time System for Robust 3D Voxel Reconstruction of Human Motions
|
[
{
"docid": "4f3b91bfaa2304e78ad5cd305fb5d377",
"text": "The construction of a three-dimensional object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, we use an octree, which represents the object as a tree of recursively subdivided cubes. We develop a new algorithm for computing the octree bounding volume from multiple silhouettes and apply it to an object rotating on a turntable in front of a stationary camera. The algorithm performs a limited amount of processing for each viewpoint and incrementally builds the volumetric model. The resulting algorithm requires less total computation than previous algorithms, runs in close to real-time, and builds a model whose resolution improves over time. 1993 Academic Press, Inc.",
"title": ""
}
] |
[
{
"docid": "5e5fcac49c2ee3f944dbc02fe70461cd",
"text": "Microkernels long discarded as unacceptable because of their lower performance compared with monolithic kernels might be making a comeback in operating systems due to their potentially higher reliability, which many researchers now regard as more important than performance. Each of the four different attempts to improve operating system reliability focuses on preventing buggy device drivers from crashing the system. In the Nooks approach, each driver is individually hand wrapped in a software jacket to carefully control its interactions with the rest of the operating system, but it leaves all the drivers in the kernel. The paravirtual machine approach takes this one step further and moves the drivers to one or more machines distinct from the main one, taking away even more power from the drivers. Both of these approaches are intended to improve the reliability of existing (legacy) operating systems. In contrast, two other approaches replace legacy operating systems with more reliable and secure ones. The multiserver approach runs each driver and operating system component in a separate user process and allows them to communicate using the microkernel's IPC mechanism. Finally, Singularity, the most radical approach, uses a type-safe language, a single address space, and formal contracts to carefully limit what each module can do.",
"title": ""
},
{
"docid": "e0d040efd131db568d875b80c6adc111",
"text": "Familism is a cultural value that emphasizes interdependent family relationships that are warm, close, and supportive. We theorized that familism values can be beneficial for romantic relationships and tested whether (a) familism would be positively associated with romantic relationship quality and (b) this association would be mediated by less attachment avoidance. Evidence indicates that familism is particularly relevant for U.S. Latinos but is also relevant for non-Latinos. Thus, we expected to observe the hypothesized pattern in Latinos and explored whether the pattern extended to non-Latinos of European and East Asian cultural background. A sample of U.S. participants of Latino (n 1⁄4 140), European (n 1⁄4 176), and East Asian (n 1⁄4 199) cultural background currently in a romantic relationship completed measures of familism, attachment, and two indices of romantic relationship quality, namely, partner support and partner closeness. As predicted, higher familism was associated with higher partner support and partner closeness, and these associations were mediated by lower attachment avoidance in the Latino sample. This pattern was not observed in the European or East Asian background samples. The implications of familism for relationships and psychological processes relevant to relationships in Latinos and non-Latinos are discussed. 1 University of California, Irvine, USA 2 University of California, Los Angeles, USA Corresponding author: Belinda Campos, Department of Chicano/Latino Studies, University of California, Irvine, 3151 Social Sciences Plaza A, Irvine, CA 92697, USA. Email: bcampos@uci.edu Journal of Social and Personal Relationships 1–20 a The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav DOI: 10.1177/0265407514562564 spr.sagepub.com J S P R at UNIV CALIFORNIA IRVINE on January 5, 2015 spr.sagepub.com Downloaded from",
"title": ""
},
{
"docid": "2dad5e4cc93246fd64b576d414fb5a3e",
"text": "Intelligent vehicles use advanced driver assistance systems (ADASs) to mitigate driving risks. There is increasing demand for an ADAS framework that can increase driving safety by detecting dangerous driving behavior from driver, vehicle, and lane attributes. However, because dangerous driving behavior in real-world driving scenarios can be caused by any or a combination of driver, vehicle, and lane attributes, the detection of dangerous driving behavior using conventional approaches that focus on only one type of attribute may not be sufficient to improve driving safety in realistic situations. To facilitate driving safety improvements, the concept of dangerous driving intensity (DDI) is introduced in this paper, and the objective of dangerous driving behavior detection is converted into DDI estimation based on the three attribute types. To this end, we propose a framework, wherein fuzzy sets are optimized using particle swarm optimization for modeling driver, vehicle, and lane attributes and then used to accurately estimate the DDI. The mean opinion scores of experienced drivers are employed to label DDI for a fair comparison with the results of our framework. The experimental results demonstrate that the driver, vehicle, and lane attributes defined in this paper provide useful cues for DDI analysis; furthermore, the results obtained using the framework are in favorable agreement with those obtained in the perception study. The proposed framework can greatly increase driving safety in intelligent vehicles, where most of the driving risk is within the control of the driver.",
"title": ""
},
{
"docid": "19bd7a6c21dd50c5dc8d14d5cfd363ab",
"text": "Frontotemporal dementia (FTD) is one of the most common forms of dementia in persons younger than 65 years. Variants include behavioral variant FTD, semantic dementia, and progressive nonfluent aphasia. Behavioral and language manifestations are core features of FTD, and patients have relatively preserved memory, which differs from Alzheimer disease. Common behavioral features include loss of insight, social inappropriateness, and emotional blunting. Common language features are loss of comprehension and object knowledge (semantic dementia), and nonfluent and hesitant speech (progressive nonfluent aphasia). Neuroimaging (magnetic resonance imaging) usually demonstrates focal atrophy in addition to excluding other etiologies. A careful history and physical examination, and judicious use of magnetic resonance imaging, can help distinguish FTD from other common forms of dementia, including Alzheimer disease, dementia with Lewy bodies, and vascular dementia. Although no cure for FTD exists, symptom management with selective serotonin reuptake inhibitors, antipsychotics, and galantamine has been shown to be beneficial. Primary care physicians have a critical role in identifying patients with FTD and assembling an interdisciplinary team to care for patients with FTD, their families, and caregivers.",
"title": ""
},
{
"docid": "61096a0d1e94bb83f7bd067b06d69edd",
"text": "A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of “overfitting”, defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. This is surprising because of the large capacity demonstrated by DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution of the puzzle for linear networks used in binary classification. They prove that minimization of loss functions such as the logistic, the cross-entropy and the exp-loss yields asymptotic, “slow” convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the normalized weight matrix at each layer of a deep network converges to a minimum norm solution (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for predicting the generalization performance of different zero minimizers of the empirical loss. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. ar X iv :1 80 6. 11 37 9v 1 [ cs .L G ] 2 9 Ju n 20 18 Theory IIIb: Generalization in Deep Networks Tomaso Poggio ∗1, Qianli Liao1, Brando Miranda1, Andrzej Banburski1, Xavier Boix1, and Jack Hidary2 1Center for Brains, Minds and Machines, MIT 2Alphabet (Google) X",
"title": ""
},
{
"docid": "ed7a1b09c68876e17679f6e61635bbb8",
"text": "Diminished antioxidant defense or increased production of reactive oxygen species in the biological system can result in oxidative stress which may lead to various neurodegenerative diseases including Alzheimer’s disease (AD). Microglial activation also contributes to the progression of AD by producing several proinflammatory cytokines, nitric oxide (NO) and prostaglandin E2 (PGE2). Oxidative stress and inflammation have been reported to be possible pathophysiological mechanisms underlying AD. In addition, the cholinergic hypothesis postulates that memory impairment in patient with AD is also associated with the deficit of cholinergic function in the brain. Although a number of drugs have been approved for the treatment of AD, most of these synthetic drugs have diverse side effects and yield relatively modest benefits. Marine algae have great potential in pharmaceutical and biomedical applications as they are valuable sources of bioactive properties such as anticoagulation, antimicrobial, antioxidative, anticancer and anti-inflammatory. Hence, this study aimed to provide an overview of the properties of Malaysian seaweeds (Padina australis, Sargassum polycystum and Caulerpa racemosa) in inhibiting oxidative stress, neuroinflammation and cholinesterase enzymes. These seaweeds significantly exhibited potent DPPH and moderate superoxide anion radical scavenging ability (P<0.05). Hexane and methanol extracts of S. polycystum exhibited the most potent radical scavenging ability with IC50 values of 0.157±0.004mg/ml and 0.849±0.02mg/ml for DPPH and ABTS assays, respectively. Hexane extract of C. racemosa gave the strongest superoxide radical inhibitory effect (IC50 of 0.386±0.01mg/ml). Most seaweed extracts significantly inhibited the production of cytokine (IL-6, IL-1 β, TNFα) and NO in a concentration-dependent manner without causing significant cytotoxicity to the lipopolysaccharide (LPS)-stimulated microglia cells (P<0.05). All extracts suppressed cytokine and NO level by more than 50% at the concentration of 0.4mg/ml. In addition, C. racemosa and S. polycystum also showed anti-acetylcholinesterase activities with the IC50 values ranging from 0.086-0.115 mg/ml. Moreover, C. racemosa and P. australis were also found to be active against butyrylcholinesterase with IC50 values ranging from 0.1180.287 mg/ml. Keywords—Anticholinesterase, antioxidative, neuroinflammation, seaweeds. Siti Aisya Gany and Swee Ching Tan are with the School of Postgraduate Studies, International Medical University, Jalan Jalil Perkasa 19, 57000 Kuala Lumpur, Malaysia (e-mail: aisyagany@gmail.com, sweeching_tan89@yahoo.com). Sook Yee Gan is with School of Pharmacy, International Medical University, Jalan Jalil Perkasa 19, 57000 Kuala Lumpur, Malaysia (corresponding author: phone: 603-27317518; fax: 603-86567228; e-mail: sookyee_gan@imu.edu.my).",
"title": ""
},
{
"docid": "3f2e76d16149b2591262befc0957e4e2",
"text": "In order to improve the performance of the high-speed brushless direct current motor drives, a novel high-precision sensorless drive has been developed. It is well known that the inevitable voltage pulses, which are generated during the commutation periods, will impact the rotor position detecting accuracy, and further impact the performance of the overall sensorless drive, especially in the higher speed range or under the heavier load conditions. For this reason, the active compensation method based on the virtual third harmonic back electromotive force incorporating the SFF-SOGI-PLL (synchronic-frequency filter incorporating the second-order generalized integrator based phase-locked loop) is proposed to precise detect the commutation points for sensorless drive. An experimental driveline system used for testing the electrical performance of the developed magnetically suspended motor is built. The mathematical analysis and the comparable experimental results have been shown to validate the effectiveness of the proposed sensorless drive algorithm.",
"title": ""
},
{
"docid": "3f1a546477d02b09016472574a6f3f6a",
"text": "The paper mainly focusses on an improved voice activity detection algorithm employing long-term signal processing and maximum spectral component tracking. The benefits of this approach have been analyzed in a previous work (Ramirez, J. et al., Proc. EUROSPEECH 2003, p.3041-4, 2003) with clear improvements in speech/non-speech discriminability and speech recognition performance in noisy environments. Two clear aspects are now considered. The first one, which improves the performance of the VAD in low noise conditions, considers an adaptive length frame window to track the long-term spectral components. The second one reduces misclassification errors in highly noisy environments by using a noise reduction stage before the long-term spectral tracking. Experimental results show clear improvements over different VAD methods in speech/pause discrimination and speech recognition performance. Particularly, improvements in recognition rate were reported when the proposed VAD replaced the VADs of the ETSI advanced front-end (AFE) for distributed speech recognition (DSR).",
"title": ""
},
{
"docid": "358598f23ee536a22e3dc15ba67e095f",
"text": "A new mechanism to balance an autonomous unicycle is explored which makes use of a simple pendulum. Mounted laterally on the unicycle chassis, the pendulum provides a means of controlling the unicycle balance in the lateral (left-right) direction. Longitudinal (forward-backward) balance is achieved by controlling the unicycle wheel, a mechanism exactly the same as that of wheeled inverted pendulum. In this paper, the pendulum-balancing concept is explained and the dynamics model of an autonomous unicycle balanced by such mechanism is derived by Lagrange-Euler formulation. The behavior is analyzed by dynamic simulation in MATLAB. Dynamics comparison with wheeled inverted pendulum and Acrobot is also performed.",
"title": ""
},
{
"docid": "17b8bff80cf87fb7e3c6c729bb41c99e",
"text": "Off-policy reinforcement learning enables near-optimal policy from suboptimal experience, thereby provisions opportunity for artificial intelligence applications in healthcare. Previous works have mainly framed patient-clinician interactions as Markov decision processes, while true physiological states are not necessarily fully observable from clinical data. We capture this situation with partially observable Markov decision process, in which an agent optimises its actions in a belief represented as a distribution of patient states inferred from individual history trajectories. A Gaussian mixture model is fitted for the observed data. Moreover, we take into account the fact that nuance in pharmaceutical dosage could presumably result in significantly different effect by modelling a continuous policy through a Gaussian approximator directly in the policy space, i.e. the actor. To address the challenge of infinite number of possible belief states which renders exact value iteration intractable, we evaluate and plan for only every encountered belief, through heuristic search tree by tightly maintaining lower and upper bounds of the true value of belief. We further resort to function approximations to update value bounds estimation, i.e. the critic, so that the tree search can be improved through more compact bounds at the fringe nodes that will be back-propagated to the root. Both actor and critic parameters are learned via gradient-based approaches. Our proposed policy trained from real intensive care unit data is capable of dictating dosing on vasopressors and intravenous fluids for sepsis patients that lead to the best patient outcomes.",
"title": ""
},
{
"docid": "63fef6099108f7990da0a7687e422e14",
"text": "The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will described all tasks in detail and present the results of all runs submitted by their participants.",
"title": ""
},
{
"docid": "7c1b301e45da5af0f5248f04dbf33f75",
"text": "[1] We invert 115 differential interferograms derived from 47 synthetic aperture radar (SAR) scenes for a time-dependent deformation signal in the Santa Clara valley, California. The time-dependent deformation is calculated by performing a linear inversion that solves for the incremental range change between SAR scene acquisitions. A nonlinear range change signal is extracted from the ERS InSAR data without imposing a model of the expected deformation. In the Santa Clara valley, cumulative land uplift is observed during the period from 1992 to 2000 with a maximum uplift of 41 ± 18 mm centered north of Sunnyvale. Uplift is also observed east of San Jose. Seasonal uplift and subsidence dominate west of the Silver Creek fault near San Jose with a maximum peak-to-trough amplitude of 35 mm. The pattern of seasonal versus long-term uplift provides constraints on the spatial and temporal characteristics of water-bearing units within the aquifer. The Silver Creek fault partitions the uplift behavior of the basin, suggesting that it acts as a hydrologic barrier to groundwater flow. While no tectonic creep is observed along the fault, the development of a low-permeability barrier that bisects the alluvium suggests that the fault has been active since the deposition of Quaternary units.",
"title": ""
},
{
"docid": "6724f1e8a34a6d9f64a30061ce7f67c0",
"text": "Mental contrasting with implementation intentions (MCII) has been found to improve selfregulation across many life domains. The present research investigates whether MCII can benefit time management. In Study 1, we asked students to apply MCII to a pressing academic problem and assessed how they scheduled their time for the upcoming week. MCII participants scheduled more time than control participants who in their thoughts either reflected on similar contents using different cognitive procedures (content control group) or applied the same cognitive procedures on different contents (format control group). In Study 2, students were taught MCII as a metacognitive strategy to be used on any upcoming concerns of the subsequent week. As compared to the week prior to the training, students in the MCII (vs. format control) condition improved in self-reported time management. In Study 3, MCII (vs. format control) helped working mothers who enrolled in a vocational business program to attend classes more regularly. The findings suggest that performing MCII on one’s everyday concerns improves time management.",
"title": ""
},
{
"docid": "e84a03caf97b5a7ee1007c0eab78664d",
"text": "We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a non-uniform sampling scheme based on the Determinantal Point Process (DPP). The DPP relies on a similarity measure between data points and gives low probabilities to mini-batches which contain redundant data, and higher probabilities to mini-batches with more diverse data. This simultaneously balances the data and leads to stochastic gradients with lower variance. We term this approach Balanced Mini-batch SGD (BM-SGD). We show that regular SGD and stratified sampling emerge as special cases. Furthermore, BM-SGD can be considered a generalization of stratified sampling to cases where no discrete features exist to bin the data into groups. We show experimentally that our method results more interpretable and diverse features in unsupervised setups, and in better classification accuracies in supervised setups.",
"title": ""
},
{
"docid": "69624e1501b897bf1a9f9a5a84132da3",
"text": "360° videos and Head-Mounted Displays (HMDs) are geing increasingly popular. However, streaming 360° videos to HMDs is challenging. is is because only video content in viewers’ Fieldof-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra eorts to align the content and sensor data using the timestamps in the raw log les. e resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven cameramovements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. 2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings ofMMSys’17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: hp://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming",
"title": ""
},
{
"docid": "3fb2635846f2339dbd68839c1359047e",
"text": "The objectives were: (i) to present a method for assessing muscle pain during exercise, (ii) to provide reliability and validity data in support of the measurement tool, (iii) to test whether leg muscle pain threshold during exercise was related to a commonly used measure of pain threshold pain during test, (iv) to examine the relationship between pain and exertion ratings, (v) to test whether leg muscle pain is related to performance, and (vi) to test whether a large dose of aspirin would delay leg muscle pain threshold and/or reduce pain ratings during exercise. In study 1, seven females and seven males completed three 1-min cycling bouts at three different randomly ordered power outputs. Pain was assessed using a 10-point pain scale. High intraclass correlations (R from 0.88 to 0.98) indicated that pain intensity could be rated reliably using the scale. In study 2, 11 college-aged males (age 21.3 +/- 1.3 yr) performed a ramped (24 W.min-1) maximal cycle ergometry test. A button was depressed when leg muscle pain threshold was reached. Pain threshold occurred near 50% of maximal capacity: 50.3 (+/- 12.9% Wmax), 48.6 (+/- 14.8% VO2max), and 55.8 (+/- 12.9% RPEmax). Pain intensity ratings obtained following pain threshold were positively accelerating function of the relative exercise intensity. Volitional exhaustion was associated with pain ratings of 8.2 (+/- 2.5), a value most closely associated with the verbal anchor \"very strong pain.\" In study 3, participants completed the same maximal exercise test as in study 2 as well as leg cycling at 60 rpm for 8 s at four randomly ordered power outputs (100, 150, 200, and 250 W) on a separate day. Pain and RPE ratings were significantly lower during the 8-s bouts compared to those obtained at the same power outputs during the maximal cycle test. The results suggest that noxious metabolites of muscle contraction play a role in leg muscle pain during exercise. In study 4, moderately active male subjects (N = 19) completed two ramped maximal cycle ergometry tests. Subjects drank a water and Kool-Aid mixture, that either was or was not (placebo) combined with a 20 mg.kg-1 dose of powdered aspirin 60 min before exercise. Paired t-tests revealed no differences between conditions for the measures of exercise intensity at pain threshold [aspirin vs placebo mean (+/- SD)]: power output: 150 (+/- 60.3 W) versus 153.5 (+/- 64.8 W); VO2: 21.3 (+/- 8.6 mL.kg-1.min-1) versus 22.1 (+/- 10.0 mL.kg-1.min-1); and RPE: 10.9 (+/- 3.1) versus 11.4 (+/- 2.9). Repeated measures ANOVA revealed no significant condition main effect or condition by trial interaction for pain responses during recovery or during exercise at 60, 70, 80, 90, and 100% of each condition's peak power output. It is concluded that the perception of leg muscle pain intensity during cycle ergometry: (i) is reliably and validly measured using the developed 10-point pain scale, (ii) covaries as a function of objective exercise stimuli such as power output, (iii) is distinct from RPE, (iv) is unrelated to performance of the type employed here, and (v) is not altered by the ingestion of 20 mg.kg-1 acetylsalicylic acid 1 h prior to the exercise bout.",
"title": ""
},
{
"docid": "a6f2cee851d2c22d471f473caf1710a1",
"text": "One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: <inline-formula><tex-math notation=\"LaTeX\">$3f+1$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq1-2495213.gif\"/></alternatives></inline-formula> replicas are required to tolerate only <inline-formula><tex-math notation=\"LaTeX\">$f$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq2-2495213.gif\"/></alternatives></inline-formula> faults. Recent works have been able to reduce the minimum number of replicas to <inline-formula><tex-math notation=\"LaTeX\">$2f+1$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq3-2495213.gif\"/></alternatives></inline-formula> by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents <italic>Resource-efficient Byzantine Fault Tolerance</italic> (<sc>ReBFT</sc>), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping <inline-formula> <tex-math notation=\"LaTeX\">$f$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq4-2495213.gif\"/> </alternatives></inline-formula> replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. In case of suspected or detected faults, passive replicas are activated in a consistent manner. To underline the flexibility of our approach, we apply <sc>ReBFT</sc> to two existing BFT systems: PBFT and MinBFT.",
"title": ""
},
{
"docid": "a05b34697055678a607ab4db4d87fa07",
"text": "This paper presents a novel set of image descriptors that encodes information from color, shape, spatial and local features of an image to improve upon the popular Pyramid of Histograms of Oriented Gradients (PHOG) descriptor for object and scene image classification. In particular, a new Gabor-PHOG (GPHOG) image descriptor created by enhancing the local features of an image using multiple Gabor filters is first introduced for feature extraction. Second, a comparative assessment of the classification performance of the GPHOG descriptor is made in grayscale and six different color spaces to further propose two novel color GPHOG descriptors that perform well on different object and scene image categories. Finally, an innovative Fused Color GPHOG (FC–GPHOG) descriptor is presented by integrating the Principal Component Analysis (PCA) features of the GPHOG descriptors in the six color spaces to combine color, shape and local feature information. Feature extraction for the proposed descriptors employs PCA and Enhanced Fisher Model (EFM), and the nearest neighbor rule is used for final classification. Experimental results using the MIT Scene dataset and the Caltech 256 object categories dataset show that the proposed new FC–GPHOG descriptor achieves a classification performance better than or comparable to other popular image descriptors, such as the Scale Invariant Feature Transform (SIFT) based Pyramid Histograms of visual Words descriptor, Color SIFT four Concentric Circles, Spatial Envelope, and Local Binary Patterns.",
"title": ""
},
{
"docid": "c8bbc713aecbc6682d21268ee58ca258",
"text": "Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X,Y )⇒ nationality(X,Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.",
"title": ""
}
] |
scidocsrr
|
23eb9aea042e83050378f7c8b5e832c2
|
CARET model checking for malware detection
|
[
{
"docid": "453af7094a854afd1dfb2e7dc36a7cca",
"text": "In this paper, we propose a new approach for the static detection of malicious code in executable programs. Our approach rests on a semantic analysis based on behaviour that even makes possible the detection of unknown malicious code. This analysis is carried out directly on binary code. Static analysis offers techniques for predicting properties of the behaviour of programs without running them. The static analysis of a given binary executable is achieved in three major steps: construction of an intermediate representation, flow-based analysis that catches securityoriented program behaviour, and static verification of critical behaviours against security policies (model checking). 1. Motivation and Background With the advent and the rising popularity of networks, Internet, intranets and distributed systems, security is becoming one of the focal points of research. As a matter of fact, more and more people are concerned with malicious code that could exist in software products. A malicious code is a piece of code that can affect the secrecy, the integrity, the data and control flow, and the functionality of a system. Therefore, ∗This research is jointly funded by a research grant from the Natural Sciences and Engineering Research Council, NSERC, Canada and also by a research contract from the Defence Research Establishment, Valcartier (DREV), 2459, Pie XI Nord, Val-Bélair, QC, Canada, G3J 1X5 their detection is a major concern within the computer science community as well as within the user community. As malicious code can affect the data and control flow of a program, static flow analysis may naturally be helpful as part of the detection process. In this paper, we address the problem of static detection of malicious code in binary executables. The primary objective of this research initiative is to elaborate practical methods and tools with robust theoretical foundations for the static detection of malicious code. The rest of the paper is organized in the following way. Section 2 is devoted to a comparison of static and dynamic approaches. Section 3 presents our approach to the detection of malices in binary executable code. Section 4 discusses the implementation of our approach. Finally, a few remarks and a discussion of future research are ultimately sketched as a conclusion in Section 5. 2. Static vs dynamic analysis There are two main approaches for the detection of malices : static analysis and dynamic analysis. Static analysis consists in examining the code of programs to determine properties of the dynamic execution of these programs without running them. This technique has been used extensively in the past by compiler developers to carry out various analyses and transformations aiming at optimizing the code [10]. Static analysis is also used in reverse engineering of software systems and for program understanding [3, 4]. Its use for the detection of malicious code is fairly recent. Dynamic analysis mainly consists in monitoring the execution of a program to detect malicious behaviour. Static analysis has the following advantages over dynamic analysis: • Static analysis techniques permit to make exhaustive analysis. They are not bound to a specific execution of a program and can give guarantees that apply to all executions of the program. In contrast, dynamic analysis techniques only allow examination of behaviours that correspond to selected test cases. 
• A verdict can be given before execution, where it may be difficult to determine the proper action to take in the presence of malices. • There is no run-time overhead. However, it may be impossible to certify statically that certain properties hold (e.g., due to undecidability). In this case, dynamic monitoring may be the only solution. Thus, static analysis and dynamic analysis are complementary. Static analysis can be used first, and properties that cannot be asserted statically can be monitored dynamically. As mentioned in the introduction, in this paper, we are concerned with static analysis techniques. Not much has been published about their use for the detection of malicious code. In [8], the authors propose a method for statically detecting malicious code in C programs. Their method is based on so-called tell-tale signs, which are program properties that allow one to distinguish between malicious and benign programs. The authors combine the tell-tale sign approach with program slicing in order to produce small fragments of large programs that can be easily analyzed. 3. Description of the Approach Static analysis techniques are generally used to operate on source code. However, as we explained in the introduction, we need to apply them to binary code, and thus, we had to adapt and evolve these techniques. Our approach is structured in three major steps: Firstly, the binary code is translated into an internal intermediate form (see Section 3.1) ; secondly, this intermediate form is abstracted through flowbased analysis as various relevant graphs (controlflow graph, data-flow graph, call graph, critical-API 1 graph, etc.) (Section 3.2); the third step is the static verification and consists in checking these graphs against security policies (Section 3.3). 3.1 Intermediate Representation A binary executable is the machine code version of a high-level or assembly program that has been compiled (or assembled) and linked for a particular platform and operating system. The general format of binary executables varies widely among operating systems. For example, the Portable Executable format (PE) is used by the Windows NT/98/95 operating system. The PE format includes comprehensive information about the different sections of the program that form the main part of the file, including the following segments: • .text, which contains the code and the entry point of the application, • .data, which contains various type of data, • .idata and .edata, which contain respectively the list of imported and exported APIs for an application or a Dynamic-Linking Library (DLL). The code segment (.text) constitutes the main part of the file; in fact, this section contains all the code that is to be analyzed. In order to translate an executable program into an equivalent high-level-language program, we use the disassembly tool IDA32 Pro [7], which can disassemble various types of executable files (ELF, EXE, PE, etc.) for several processors and operating systems (Windows 98, Windows NT, etc.). Also, IDA32 automatically recognizes calls to the standard libraries (i.e., API calls) for a long list of compilers. Statically analysing a program requires the construction of the syntax tree of this program, also called intermediate representation. The various techniques of static analysis are based on this abstract representation. The goal of the first step is to disassemble the binary code and then to parse the assembly code thus generated to produce the syntax tree (Figure 1). API: Application Program Interface.",
"title": ""
}
] |
[
{
"docid": "270def19bfb0352d38d30ed8389d6c2a",
"text": "Morphology plays an important role in behavioral and locomotion strategies of living and artificial systems. There is biological evidence that adaptive morphological changes can not only extend dynamic performances by reducing tradeoffs during locomotion but also provide new functionalities. In this article, we show that adaptive morphology is an emerging design principle in robotics that benefits from a new generation of soft, variable-stiffness, and functional materials and structures. When moving within a given environment or when transitioning between different substrates, adaptive morphology allows accommodation of opposing dynamic requirements (e.g., maneuverability, stability, efficiency, and speed). Adaptive morphology is also a viable solution to endow robots with additional functionalities, such as transportability, protection, and variable gearing. We identify important research and technological questions, such as variable-stiffness structures, in silico design tools, and adaptive control systems to fully leverage adaptive morphology in robotic systems.",
"title": ""
},
{
"docid": "54d293423026d84bce69e8e073ebd6ac",
"text": "AIMS\nPredictors of Response to Cardiac Resynchronization Therapy (CRT) (PROSPECT) was the first large-scale, multicentre clinical trial that evaluated the ability of several echocardiographic measures of mechanical dyssynchrony to predict response to CRT. Since response to CRT may be defined as a spectrum and likely influenced by many factors, this sub-analysis aimed to investigate the relationship between baseline characteristics and measures of response to CRT.\n\n\nMETHODS AND RESULTS\nA total of 286 patients were grouped according to relative reduction in left ventricular end-systolic volume (LVESV) after 6 months of CRT: super-responders (reduction in LVESV > or =30%), responders (reduction in LVESV 15-29%), non-responders (reduction in LVESV 0-14%), and negative responders (increase in LVESV). In addition, three subgroups were formed according to clinical and/or echocardiographic response: +/+ responders (clinical improvement and a reduction in LVESV > or =15%), +/- responders (clinical improvement or a reduction in LVESV > or =15%), and -/- responders (no clinical improvement and no reduction in LVESV > or =15%). Differences in clinical and echocardiographic baseline characteristics between these subgroups were analysed. Super-responders were more frequently females, had non-ischaemic heart failure (HF), and had a wider QRS complex and more extensive mechanical dyssynchrony at baseline. Conversely, negative responders were more frequently in New York Heart Association class IV and had a history of ventricular tachycardia (VT). Combined positive responders after CRT (+/+ responders) had more non-ischaemic aetiology, more extensive mechanical dyssynchrony at baseline, and no history of VT.\n\n\nCONCLUSION\nSub-analysis of data from PROSPECT showed that gender, aetiology of HF, QRS duration, severity of HF, a history of VT, and the presence of baseline mechanical dyssynchrony influence clinical and/or LV reverse remodelling after CRT. Although integration of information about these characteristics would improve patient selection and counselling for CRT, further randomized controlled trials are necessary prior to changing the current guidelines regarding patient selection for CRT.",
"title": ""
},
{
"docid": "76c279b79355efa4d357655e56e84f3d",
"text": "BACKGROUND\nHypertension has proven to be a strong liability with 13.5% of all mortality worldwide being attributed to elevated blood pressures in 2001. An accurate blood pressure measurement lies at the crux of an appropriate diagnosis. Despite the mercury sphygmomanometer being the gold standard, the ongoing deliberation as to whether mercury sphygmomanometers should be replaced with the automated oscillometric devices stems from the risk mercury poses to the environment.\n\n\nAIM\nThis study was performed to check the validity of automated oscillometric blood pressure measurements as compared to the manual blood pressure measurements in Karachi, Pakistan.\n\n\nMATERIAL AND METHODS\nBlood pressure was recorded in 200 individuals aged 15 and above using both, an automated oscillometric blood pressure device (Dinamap Procare 100) and a manual mercury sphygmomanometer concomitantly. Two nurses were assigned to each patient and the device, arm for taking the reading and nurses were randomly determined. SPSS version 20 was used for analysis. Mean and standard deviation of the systolic and diastolic measurements from each modality were compared to each other and P values of 0.05 or less were considered to be significant. Validation criteria of British Hypertension Society (BHS) and the US Association for the Advancement of Medical Instrumentation (AAMI) were used.\n\n\nRESULTS\nTwo hundred patients were included. The mean of the difference of systolic was 8.54 ± 9.38 while the mean of the difference of diastolic was 4.21 ± 7.88. Patients were further divided into three groups of different systolic blood pressure <= 120, > 120 to = 150 and > 150, their means were 6.27 ± 8.39 (p-value 0.175), 8.91 ± 8.96 (p-value 0.004) and 10.98 ± 10.49 (p-value 0.001) respectively. In our study 89 patients were previously diagnosed with hypertension; their difference of mean systolic was 9.43 ± 9.89 (p-value 0.000) and difference of mean diastolic was 4.26 ± 7.35 (p-value 0.000).\n\n\nCONCLUSIONS\nSystolic readings from a previously validated device are not reliable when used in the ER and they show a higher degree of incongruency and inaccuracy when they are used outside validation settings. Also, readings from the right arm tend to be more precise.",
"title": ""
},
{
"docid": "33be5718d8a60f36e5faaa0cc4f0019f",
"text": "Most of our daily activities are now moving online in the big data era, with more than 25 billion devices already connected to the Internet, to possibly over a trillion in a decade. However, big data also bears a connotation of “big brother” when personal information (such as sales transactions) is being ubiquitously collected, stored, and circulated around the Internet, often without the data owner's knowledge. Consequently, a new paradigm known as online privacy or Internet privacy is becoming a major concern regarding the privacy of personal and sensitive data.",
"title": ""
},
{
"docid": "2ee579f06ca68d13823f8576122c20fe",
"text": "Current trends in distributed denial of service (DDoS) attacks show variations in terms of attack motivation, planning, infrastructure, and scale. “DDoS-for-Hire” and “DDoS mitigation as a Service” are the two services, which are available to attackers and victims, respectively. In this work, we provide a fundamental difference between a “regular” DDoS attack and an “extreme” DDoS attack. We conduct DDoS attacks on cloud services, where having the same attack features, two different services show completely different consequences, due to the difference in the resource utilization per request. We study various aspects of these attacks and find out that the DDoS mitigation service’s performance is dependent on two factors. Gaurav Somani gaurav@curaj.ac.in Manoj Singh Gaur gaurms@mnit.ac.in Dheeraj Sanghi dheeraj@cse.iitk.ac.in Mauro Conti conti@math.unipd.it Rajkumar Buyya rbuyya@unimelb.edu.au 1 Central University of Rajasthan, Rajasthan, India 2 Malaviya National Institute of Technology, Rajasthan, India 3 Indian Institute of Technology, Kanpur, India 4 University of Padua, Padua, Italy 5 The University of Melbourne, Parkville, Australia One factor is related to the severity of the “resource-race” with the victim web-service. Second factor is “attack cooling down period” which is the time taken to bring the service availability post detection of the attack. Utilizing these two important factors, we propose a supporting framework for the DDoS mitigation services, by assisting in reducing the attack mitigation time and the overall downtime. This novel framework comprises of an affinity-based victim-service resizing algorithm to provide performance isolation, and a TCP tuning technique to quickly free the attack connections, hence minimizing the attack cooling down period. We evaluate the proposed novel techniques with real attack instances and compare various attack metrics. Results show a significant improvement to the performance of DDoS mitigation service, providing quick attack mitigation. The presence of proposed DDoS mitigation support framework demonstrated a major reduction of more than 50% in the service downtime.",
"title": ""
},
{
"docid": "43850ef433d1419ed37b7b12f3ff5921",
"text": "We have seen ten years of the application of AI planning to the problem of narrative generation in Interactive Storytelling (IS). In that time planning has emerged as the dominant technology and has featured in a number of prototype systems. Nevertheless key issues remain, such as how best to control the shape of the narrative that is generated (e.g., by using narrative control knowledge, i.e., knowledge about narrative features that enhance user experience) and also how best to provide support for real-time interactive performance in order to scale up to more realistic sized systems. Recent progress in planning technology has opened up new avenues for IS and we have developed a novel approach to narrative generation that builds on this. Our approach is to specify narrative control knowledge for a given story world using state trajectory constraints and then to treat these state constraints as landmarks and to use them to decompose narrative generation in order to address scalability issues and the goal of real-time performance in larger story domains. This approach to narrative generation is fully implemented in an interactive narrative based on the “Merchant of Venice.” The contribution of the work lies both in our novel use of state constraints to specify narrative control knowledge for interactive storytelling and also our development of an approach to narrative generation that exploits such constraints. In the article we show how the use of state constraints can provide a unified perspective on important problems faced in IS.",
"title": ""
},
{
"docid": "8bc615dfa51a9c5835660c1b0eb58209",
"text": "Large scale grid connected photovoltaic (PV) energy conversion systems have reached the megawatt level. This imposes new challenges on existing grid interface converter topologies and opens new opportunities to be explored. In this paper a new medium voltage multilevel-multistring configuration is introduced based on a three-phase cascaded H-bridge (CHB) converter and multiple string dc-dc converters. The proposed configuration enables a large increase of the total capacity of the PV system, while improving power quality and efficiency. The converter structure is very flexible and modular since it decouples the grid converter from the PV string converter, which allows to accomplish independent control goals. The main challenge of the proposed configuration is to handle the inherent power imbalances that occur not only between the different cells of one phase of the converter but also between the three phases. The control strategy to deal with these imbalances is also introduced in this paper. Simulation results of a 7-level CHB for a multistring PV system are presented to validate the proposed topology and control method.",
"title": ""
},
{
"docid": "b1746ab2946c51bcd10360d051da351f",
"text": "BACKGROUND AND OBJECTIVE\nThe ICD-9-CM adaptation of the Charlson comorbidity score has been a valuable resource for health services researchers. With the transition into ICD-10 coding worldwide, an ICD-10 version of the Deyo adaptation was developed and validated using population-based hospital data from Victoria, Australia.\n\n\nMETHODS\nThe algorithm was translated from ICD-9-CM into ICD-10-AM (Australian modification) in a multistep process. After a mapping algorithm was used to develop an initial translation, these codes were manually examined by the coding experts and a general physician for face validity. Because the ICD-10 system is country specific, our goal was to keep many of the translated code at the three-digit level for generalizability of the new index.\n\n\nRESULTS\nThere appears to be little difference in the distribution of the Charlson Index score between the two versions. A strong association between increasing index scores and mortality exists: the area under the ROC curve is 0.865 for the last year using the ICD-9-CM version and remains high, at 0.855, for the ICD-10 version.\n\n\nCONCLUSION\nThis work represents the first rigorous adaptation of the Charlson comorbidity index for use with ICD-10 data. In comparison with a well-established ICD-9-CM coding algorithm, it yields closely similar prevalence and prognosis information by comorbidity category.",
"title": ""
},
{
"docid": "035bfa3cb164cb6d10a7b496c3e74854",
"text": "Question Answering (QA) systems over Knowledge Graphs (KG) automatically answer natural language questions using facts contained in a knowledge graph. Simple questions, which can be answered by the extraction of a single fact, constitute a large part of questions asked on the web but still pose challenges to QA systems, especially when asked against a large knowledge resource. Existing QA systems usually rely on various components each specialised in solving different sub-tasks of the problem (such as segmentation, entity recognition, disambiguation, and relation classification etc.). In this work, we follow a quite different approach: We train a neural network for answering simple questions in an end-to-end manner, leaving all decisions to the model. It learns to rank subject-predicate pairs to enable the retrieval of relevant facts given a question. The network contains a nested word/character-level question encoder which allows to handle out-of-vocabulary and rare word problems while still being able to exploit word-level semantics. Our approach achieves results competitive with state-of-the-art end-to-end approaches that rely on an attention mechanism.",
"title": ""
},
{
"docid": "09623c821f05ffb7840702a5869be284",
"text": "Area-restricted search (ARS) is a foraging strategy used by many animals to locate resources. The behavior is characterized by a time-dependent reduction in turning frequency after the last resource encounter. This maximizes the time spent in areas in which resources are abundant and extends the search to a larger area when resources become scarce. We demonstrate that dopaminergic and glutamatergic signaling contribute to the neural circuit controlling ARS in the nematode Caenorhabditis elegans. Ablation of dopaminergic neurons eliminated ARS behavior, as did application of the dopamine receptor antagonist raclopride. Furthermore, ARS was affected by mutations in the glutamate receptor subunits GLR-1 and GLR-2 and the EAT-4 glutamate vesicular transporter. Interestingly, preincubation on dopamine restored the behavior in worms with defective dopaminergic signaling, but not in glr-1, glr-2, or eat-4 mutants. This suggests that dopaminergic and glutamatergic signaling function in the same pathway to regulate turn frequency. Both GLR-1 and GLR-2 are expressed in the locomotory control circuit that modulates the direction of locomotion in response to sensory stimuli and the duration of forward movement during foraging. We propose a mechanism for ARS in C. elegans in which dopamine, released in response to food, modulates glutamatergic signaling in the locomotory control circuit, thus resulting in an increased turn frequency.",
"title": ""
},
{
"docid": "d103d856c51a4744d563dff2eff224a7",
"text": "Automotive engines is an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, and using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test-cycles.",
"title": ""
},
{
"docid": "ee5fbcc34536f675cadb8e20eb6eb520",
"text": "This work addresses employing direct and indirect discretization methods to obtain a rational discrete approximation of continuous time parallel fractional PID controllers. The different approaches are illustrated by implementing them on an example.",
"title": ""
},
{
"docid": "d7aeb8de7bf484cbaf8e23fcf675d002",
"text": "One method for detecting fraud is to check for suspicious changes in user behavior. This paper proposes a novel method, built upon ontology and ontology instance similarity. Ontology is now widely used to enable knowledge sharing and reuse, so some personality ontologies can be easily used to present user behavior. By measure the similarity of ontology instances, we can determine whether an account is defrauded. This method lows the data model cost and make the system very adaptive to different applications.",
"title": ""
},
{
"docid": "1c4e71d00521219717607cbef90b5bec",
"text": "The design of security for cyber-physical systems must take into account several characteristics common to such systems. Among these are feedback between the cyber and physical environment, distributed management and control, uncertainty, real-time requirements, and geographic distribution. This paper discusses these characteristics and suggests a design approach that better integrates security into the core design of the system. A research roadmap is presented that highlights some of the missing pieces needed to enable such an approach. 1. What is a Cyber-Physical-System? The term cyber-physical system has been applied to many problems, ranging from robotics, through SCADA, and distributed control systems. Not all cyber-physical systems involve critical infrastructure, but there are common elements that change the nature of the solutions that must be considered when securing cyber-physical systems. First, the extremely critical nature of activities performed by some cyber-physical systems means that we need security that works, and that by itself means we need something different. All kidding aside, there are fundamental system differences in cyber-physical systems that will force us to look at security in ways more closely tied to the physical application. It is my position that by focusing on these differences we can see where new (or rediscovered) approaches are needed, and that by building systems that support the inclusion of security as part of the application architecture, we can improve the security of both cyber-physical systems, where such an approach is most clearly warranted, as well as improve the security of cyber-only systems, where such an approach is more easily ignored. In this position paper I explain the characteristics of cyber-physical systems that must drive new research in security. I discuss the security problem areas that need attention because of these characteristics and I describe a design methodology for security that provides for better integration of security design with application design. Finally, I suggest some of the components of future systems that can help us include security as a focusing issue in the architectural design of critical applications.",
"title": ""
},
{
"docid": "8adc8d2bf7f26d43ed0656126f50566a",
"text": "Framing is a potentially useful paradigm for examining the strategic creation of public relations messages and audience responses. Based on a literature review across disciplines, this article identifies 7 distinct types of framing applicable to public relations. These involve the framing of situations, attributes, choices, actions, issues, responsibility, and news. Potential applications for public relations practice and research are discussed.",
"title": ""
},
{
"docid": "1fa7c954f5e352679c33d8946f4cac4e",
"text": "In some cases, such as in the estimation of impulse responses, it has been found that for plausible sample sizes the coverage accuracy of single bootstrap confidence intervals can be poor. The error in the coverage probability of single bootstrap confidence intervals may be reduced by the use of double bootstrap confidence intervals. The computer resources required for double bootstrap confidence intervals are often prohibitive, especially in the context of Monte Carlo studies. Double bootstrap confidence intervals can be estimated using computational algorithms incorporating simple deterministic stopping rules that avoid unnecessary computations. These algorithms may make the use and Monte Carlo evaluation of double bootstrap confidence intervals feasible in cases where otherwise they would not be feasible. The efficiency gains due to the use of these algorithms are examined by means of a Monte Carlo study for examples of confidence intervals for a mean and for the cumulative impulse response in a second order autoregressive model.",
"title": ""
},
{
"docid": "4c0dc05d6571a5411be60320893c65db",
"text": "Online labor markets, such as Amazon's Mechanical Turk, have been used to crowdsource simple, short tasks like image labeling and transcription. However, expert knowledge is often lacking in such markets, making it impossible to complete certain classes of tasks. In this work we introduce an alternative mechanism for crowdsourcing tasks that require specialized knowledge or skill: communitysourcing --- the use of physical kiosks to elicit work from specific populations. We investigate the potential of communitysourcing by designing, implementing and evaluating Umati: the communitysourcing vending machine. Umati allows users to earn credits by performing tasks using a touchscreen attached to the machine. Physical rewards (in this case, snacks) are dispensed through traditional vending mechanics. We evaluated whether communitysourcing can accomplish expert work by using Umati to grade Computer Science exams. We placed Umati in a university Computer Science building, targeting students with grading tasks for snacks. Over one week, 328 unique users (302 of whom were students) completed 7771 tasks (7240 by students). 80% of users had never participated in a crowdsourcing market before. We found that Umati was able to grade exams with 2% higher accuracy (at the same price) or at 33% lower cost (at equivalent accuracy) than traditional single-expert grading. Mechanical Turk workers had no success grading the same exams. These results indicate that communitysourcing can successfully elicit high-quality expert work from specific communities.",
"title": ""
},
{
"docid": "aaf3f18581f141355a5865883a30759a",
"text": "Matrix factorization is a fundamental problem that is often encountered in many computer vision and machine learning tasks. In recent years, enhancing the robustness of matrix factorization methods has attracted much attention in the research community. To benefit from the strengths of full Bayesian treatment over point estimation, we propose here a full Bayesian approach to robust matrix factorization. For the generative process, the model parameters have conjugate priors and the likelihood (or noise model) takes the form of a Laplace mixture. For Bayesian inference, we devise an efficient sampling algorithm by exploiting a hierarchical view of the Laplace distribution. Besides the basic model, we also propose an extension which assumes that the outliers exhibit spatial or temporal proximity as encountered in many computer vision applications. The proposed methods give competitive experimental results when compared with several state-of-the-art methods on some benchmark image and video processing tasks.",
"title": ""
},
{
"docid": "d4dc33b15df0a27259180fef3c28b546",
"text": "Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert’s knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively compared with other methods that use predefined feature set (Table 3).",
"title": ""
},
{
"docid": "a7d7c7ae9da5936f050443f684f48916",
"text": "There is growing evidence for the presence of viable microorganisms in geological salt formations that are millions of years old. It is still not known, however, whether these bacteria are dormant organisms that are themselves millions of years old or whether the salt crystals merely provide a habitat in which contemporary microorganisms can grow, perhaps interspersed with relatively short periods of dormancy (McGenity et al. 2000). Vreeland, Rosenzweig and Powers (2000) have recently reported the isolation and growth of a halotolerant spore-formingBacillus species from a brine inclusion within a 250-Myr-old salt crystal from the Permian Salado Formation in New Mexico. This bacterium, Bacillus strain 2-9-3, was informally christened Bacillus permians, and a 16S ribosomal RNA gene was sequenced and deposited in GenBank under the name B. permians (accession number AF166093). It has been claimed thatB. permians was trapped inside the salt crystal 250 MYA and survived within the crystal until the present, most probably as a spore. Serious doubts have been raised concerning the possibility of spore survival for 250 Myr (Tomas Lindahl, personal communication), mostly because spores contain no active DNA repair enzymes, so the DNA is expected to decay into small fragments due to such factors as the natural radioactive radiation in the soil, and the bacterium is expected to lose its viability within at most several hundred years (Lindahl 1993). In this note, we apply theproof-of-the-pudding-is-in-the-eating principle to test whether the newly reported B. permians 16S ribosomal RNA gene sequence is ancient or not. There are several reasons to doubt the antiquity of B. permians. The first concerns the extraordinary similarity of its 16S rRNA gene sequence to that of Bacillus marismortui. Bacillus marismortui was described by Arahal et al. (1999) as a moderately halophilic species from the Dead Sea and was later renamed Salibacillus marismortui (Arahal et al. 2000). TheB. permians sequence differs from that of S. marismortui by only one transition and one transversion out of the 1,555 aligned and unambiguously determined nucleotides. In comparison, the 16S rRNA gene fromStaphylococcus succinus, which was claimed to be ‘‘25–35 million years old’’ (Lambert et al. 1998), differs from its homolog in its closest present-day relative (a urinary pathogen called Staphylococcus saprophyticus) by 19 substitutions out of 1,525 aligned nucleotides. Using Kimura’s (1980) two-parameter model, the difference between the B. permians and S. marismortui sequences translates into 1.3",
"title": ""
}
] |
scidocsrr
|
a985dd470a44af9003a57e24ab4066bc
|
Leveraging mid-level deep representations for predicting face attributes in the wild
|
[
{
"docid": "af56806a30f708cb0909998266b4d8c1",
"text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-m l is an open source library, targeted at both engineers and research scientists, which aims to pro vide a similarly rich environment for developing machine learning software in the C++ language. T owards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS supp ort. It also houses implementations of algorithms for performing inference in Bayesian networks a nd kernel-based methods for classification, regression, clustering, anomaly detection, and fe atur ranking. To enable easy use of these tools, the entire library has been developed with contract p rogramming, which provides complete and precise documentation as well as powerful debugging too ls.",
"title": ""
}
] |
[
{
"docid": "73d09f005f9335827493c3c47d02852b",
"text": "Multiprotocol Label Switched Networks need highly intelligent controls to manage high volume traffic due to issues of traffic congestion and best path selection. The work demonstrated in this paper shows results from simulations for building optimal fuzzy based algorithm for traffic splitting and congestion avoidance. The design and implementation of Fuzzy based software defined networking is illustrated by introducing the Fuzzy Traffic Monitor in an ingress node. Finally, it displays improvements in the terms of mean delay (42.0%) and mean loss rate (2.4%) for Video Traffic. Then, the resu1t shows an improvement in the terms of mean delay (5.4%) and mean loss rate (3.4%) for Data Traffic and an improvement in the terms of mean delay(44.9%) and mean loss rate(4.1%) for Voice Traffic as compared to default MPLS implementation. Keywords—Multiprotocol Label Switched Networks; Fuzzy Traffic Monitor; Network Simulator; Ingress; Traffic Splitting; Fuzzy Logic Control System; Label setup System; Traffic Splitting System",
"title": ""
},
{
"docid": "2746d538694db54381639e5e5acdb4ca",
"text": "In the present research, the aqueous stability of leuprolide acetate (LA) in phosphate buffered saline (PBS) medium was studied (pH = 2.0-7.4). For this purpose, the effect of temperature, dissolved oxygen and pH on the stability of LA during 35 days was investigated. Results showed that the aqueous stability of LA was higher at low temperatures. Degassing of the PBS medium partially increased the stability of LA at 4 °C, while did not change at 37 °C. The degradation of LA was accelerated at lower pH values. In addition, complexes of LA with different portions of β-cyclodextrin (β-CD) were prepared through freeze-drying procedure and characterized by Fourier transform infrared (FTIR) and differential scanning calorimetry (DSC) analyses. Studying their aqueous stability at various pH values (2.0-7.4) showed LA/β-CD complexes exhibited higher stability when compared with LA at all pH values. The stability of complexes was also improved by increasing the portion of LA/β-CD up to 1/10.",
"title": ""
},
{
"docid": "997228bb93bc851498877047fec4a42f",
"text": "A method with clear guidelines is presented to design compact planar phase shifters with ultra-wideband (UWB) characteristics. The proposed method exploits broadside coupling between top and bottom elliptical microstrip patches via an elliptical slot located in the mid layer, which forms the ground plane. A theoretical model is used to analyze performance of the proposed devices. The model shows that it is possible to design high-performance UWB phase shifters for the 25deg-48deg range using the proposed structure. The method is used to design 30deg and 45deg phase shifters that have compact size, i.e., 2.5 cm times 2 cm. The simulated and measured results show that the designed phase shifters achieve better than plusmn3deg differential phase stability, less than 1-dB insertion loss, and better than 10-dB return loss across the UWB, i.e., 3.1-10.6 GHz.",
"title": ""
},
{
"docid": "c25144cf41462c58820fdcd3652e9fec",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.02.043 * Corresponding author. Tel.: +3",
"title": ""
},
{
"docid": "714242b8967ef68c022e568ef2fe01dd",
"text": "Visual localization is a key step in many robotics pipelines, allowing the robot to (approximately) determine its position and orientation in the world. An efficient and scalable approach to visual localization is to use image retrieval techniques. These approaches identify the image most similar to a query photo in a database of geo-tagged images and approximate the query’s pose via the pose of the retrieved database image. However, image retrieval across drastically different illumination conditions, e.g. day and night, is still a problem with unsatisfactory results, even in this age of powerful neural models. This is due to a lack of a suitably diverse dataset with true correspondences to perform end-to-end learning. A recent class of neural models allows for realistic translation of images among visual domains with relatively little training data and, most importantly, without ground-truth pairings. In this paper, we explore the task of accurately localizing images captured from two traversals of the same area in both day and night. We propose ToDayGAN – a modified imagetranslation model to alter nighttime driving images to a more useful daytime representation. We then compare the daytime and translated night images to obtain a pose estimate for the night image using the known 6-DOF position of the closest day image. Our approach improves localization performance by over 250% compared the current state-of-the-art, in the context of standard metrics in multiple categories.",
"title": ""
},
{
"docid": "e7232201e629e45b1f8f9a49cb1fdedf",
"text": "Semantic Data Mining refers to the data mining tasks that systematically incorporate domain knowledge, especially formal semantics, into the process. In the past, many research efforts have attested the benefits of incorporating domain knowledge in data mining. At the same time, the proliferation of knowledge engineering has enriched the family of domain knowledge, especially formal semantics and Semantic Web ontologies. Ontology is an explicit specification of conceptualization and a formal way to define the semantics of knowledge and data. The formal structure of ontology makes it a nature way to encode domain knowledge for the data mining use. In this survey paper, we introduce general concepts of semantic data mining. We investigate why ontology has the potential to help semantic data mining and how formal semantics in ontologies can be incorporated into the data mining process. We provide detail discussions for the advances and state of art of ontology-based approaches and an introduction of approaches that are based on other form of knowledge representations.",
"title": ""
},
{
"docid": "89bcf5b0af2f8bf6121e28d36ca78e95",
"text": "3 Relating modules to external clinical traits 2 3.a Quantifying module–trait associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership . . . . 2 3.c Intramodular analysis: identifying genes with high GS and MM . . . . . . . . . . . . . . . . . . . . . . 3 3.d Summary output of network analysis results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4",
"title": ""
},
{
"docid": "ec4bde3a67cccca41ca3e7af00072f1c",
"text": "Single-nucleus RNA sequencing (sNuc-seq) profiles RNA from tissues that are preserved or cannot be dissociated, but it does not provide high throughput. Here, we develop DroNc-seq: massively parallel sNuc-seq with droplet technology. We profile 39,111 nuclei from mouse and human archived brain samples to demonstrate sensitive, efficient, and unbiased classification of cell types, paving the way for systematic charting of cell atlases.",
"title": ""
},
{
"docid": "9c1f7c4fc30a10f306354f83f6b8d9cd",
"text": "A unified and powerful approach is presented for devising polynomial approximation schemes for many strongly NP-complete problems. Such schemes consist of families of approximation algorithms for each desired performance bound on the relative error ε > &Ogr;, with running time that is polynomial when ε is fixed. Though the polynomiality of these algorithms depends on the degree of approximation ε being fixed, they cannot be improved, owing to a negative result stating that there are no fully polynomial approximation schemes for strongly NP-complete problems unless NP = P.\nThe unified technique that is introduced here, referred to as the shifting strategy, is applicable to numerous geometric covering and packing problems. The method of using the technique and how it varies with problem parameters are illustrated. A similar technique, independently devised by B. S. Baker, was shown to be applicable for covering and packing problems on planar graphs.",
"title": ""
},
{
"docid": "bd72a921c7bfa4a7db8ca9dd8715fa45",
"text": "Augmented Reality (AR) is growing rapidly and becoming a mature and robust technology, which combines virtual information with the real environment and real-time performance. It is important to ensure the acceptance and success of augmented reality systems. With the growth of elderly users, evidence shows potential trends for AR systems to support the elderly, including transport, ageing in place, entertainment and training. However, there is a lack of research to provide the theoretical framework or AR design principles to support designers when developing suitable AR applications for specific populations (e.g. older people). In my PhD thesis, I will focus on the possibility of developing and applying AR design principles to support the design of applications that address older people's requirements. In this paper, I first discuss the architecture of augmented reality and identify the relationship between different elements. Secondly, the relevant literature has been reviewed in terms of design challenges of AR and design principles. Thirdly, I formulate the five initial design principles as the fundamental work of my PhD. It is expected that design principles could help AR designers to explore quality design alternatives, which could potentially benefit the ageing population. Fourthly, I identify the AR pillbox as an example to explain how design principles can be applied to AR applications. In terms of the methodology, preparation, refinement and validation are the three main stages to achieve the research goal. Preparation stage aims to generate the preliminary AR design principles and identify the relevant scenarios that might assist the designers to understand the principles and explore the design alternatives. In the stages of refinement, a half-day workshop has been conducted to explore different design issues based on different scenarios and refine the preliminary design principles. After that, a new set of design principles will be formulated. The final stage is to validate the effectiveness of new design principles based on the previous workshop’s feedback.",
"title": ""
},
{
"docid": "2c68945d68f8ccf90648bec7fd5b0547",
"text": "The number of seniors and other people needing daily assistance continues to increase, but the current human resources available to achieve this in the coming years will certainly be insufficient. To remedy this situation, smart habitats have emerged as an innovative avenue for supporting needs of daily assistance. Smart homes aim to provide cognitive assistance in decision making by giving hints, suggestions, and reminders, with different kinds of effectors, to residents. To implement such technology, the first challenge to overcome is the recognition of ongoing activity. Some researchers have proposed solutions based on binary sensors or cameras, but these types of approaches infringed on residents' privacy. A new affordable activity-recognition system based on passive RFID technology can detect errors related to cognitive impairment. The entire system relies on an innovative model of elliptical trilateration with several filters, as well as on an ingenious representation of activities with spatial zones. The authors have deployed the system in a real smart-home prototype; this article renders the results of a complete set of experiments conducted on this new activity-recognition system with real scenarios.",
"title": ""
},
{
"docid": "24ecf1119592cc5496dc4994d463eabe",
"text": "To improve data availability and resilience MapReduce frameworks use file systems that replicate data uniformly. However, analysis of job logs from a large production cluster shows wide disparity in data popularity. Machines and racks storing popular content become bottlenecks; thereby increasing the completion times of jobs accessing this data even when there are machines with spare cycles in the cluster. To address this problem, we present Scarlett, a system that replicates blocks based on their popularity. By accurately predicting file popularity and working within hard bounds on additional storage, Scarlett causes minimal interference to running jobs. Trace driven simulations and experiments in two popular MapReduce frameworks (Hadoop, Dryad) show that Scarlett effectively alleviates hotspots and can speed up jobs by 20.2%.",
"title": ""
},
{
"docid": "888ca06bc504dd82308a4ecc462e869b",
"text": "This paper describes the conceptual design of an arm (right or left) powered exoframe (exoskeleton) which can be used in rehabilitation or by an army soldier who are debilitated to move their hands freely, and to lift the weight .This machine is designed for the application of teleoperation, virtual reality, military and rehabilitation. The option is put forward for a mechanical structure kinematical equivalent to the structure of the human arm. The elbow joint rotation is about -90 to 70 degrees. This arm can be used in both hands. This is a wearable robot i.e. mechatronic system with Velcro straps along with that it is a light weight device. It will also work mechanically with a push of a button as well as electrically with the help of solenoidal valve. Here the energy conversion is done using Pneumatic Cylinder (double acting) which is given the flow of compressed air through Solenoidal Valve, which control direction of flow and movement of piston.",
"title": ""
},
{
"docid": "957a179c41a641f337b89dbfdc8ea1a9",
"text": "Medical staff around the world must take reasonable steps to identify newborns and infants clearly, so as to prevent mix-ups, and to ensure the correct medication reaches the correct child. Footprints are frequently taken despite verification with footprints being challenging due to strong noise. The noise is introduced by the tininess of the structures, movement during capture, and the infant's rapid growth. In this article we address the image processing part of the problem and introduce a novel algorithm for the extraction of creases from infant footprints. The algorithm uses directional filtering on different resolution levels, morphological processing, and block-wise crease line reconstruction. We successfully test our method on noise-affected infant footprints taken from the same infants at different ages.",
"title": ""
},
{
"docid": "7c23d90cd8e7e5223a13882833fa7c66",
"text": "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.",
"title": ""
},
{
"docid": "6bc611936d412dde15999b2eb179c9e2",
"text": "Smith-Lemli-Opitz syndrome, a severe developmental disorder associated with multiple congenital anomalies, is caused by a defect of cholesterol biosynthesis. Low cholesterol and high concentrations of its direct precursor, 7-dehydrocholesterol, in plasma and tissues are the diagnostic biochemical hallmarks of the syndrome. The plasma sterol concentrations correlate with severity and disease outcome. Mutations in the DHCR7 gene lead to deficient activity of 7-dehydrocholesterol reductase (DHCR7), the final enzyme of the cholesterol biosynthetic pathway. The human DHCR7 gene is localised on chromosome 11q13 and its structure has been characterized. Ninetyone different mutations in the DHCR7 gene have been published to date. This paper is a review of the clinical, biochemical and molecular genetic aspects.",
"title": ""
},
{
"docid": "8eb5e5d7c224782506aba37dcb91614f",
"text": "With adolescents’ frequent use of social media, electronic bullying has emerged as a powerful platform for peer victimization. The present two studies explore how adolescents perceive electronic vs. traditional bullying in emotional impact and strategic responses. In Study 1, 97 adolescents (mean age = 15) viewed hypothetical peer victimization scenarios, in parallel electronic and traditional forms, with female characters experiencing indirect relational aggression and direct verbal aggression. In Study 2, 47 adolescents (mean age = 14) viewed the direct verbal aggression scenario from Study 1, and a new scenario, involving male characters in the context of direct verbal aggression. Participants were asked to imagine themselves as the victim in all scenarios and then rate their emotional reactions, strategic responses, and goals for the outcome. Adolescents reported significant negative emotions and disruptions in typical daily activities as the victim across divergent bullying scenarios. In both studies few differences emerged when comparing electronic to traditional bullying, suggesting that online and off-line bullying are subtypes of peer victimization. There were expected differences in strategic responses that fit the medium of the bullying. Results also suggested that embarrassment is a common and highly relevant negative experience in both indirect relational and direct verbal aggression among",
"title": ""
},
{
"docid": "6dfb4c016db41a27587ef08011a7cf0e",
"text": "The objective of this work is to detect shadows in images. We pose this as the problem of labeling image regions, where each region corresponds to a group of superpixels. To predict the label of each region, we train a kernel Least-Squares Support Vector Machine (LSSVM) for separating shadow and non-shadow regions. The parameters of the kernel and the classifier are jointly learned to minimize the leave-one-out cross validation error. Optimizing the leave-one-out cross validation error is typically difficult, but it can be done efficiently in our framework. Experiments on two challenging shadow datasets, UCF and UIUC, show that our region classifier outperforms more complex methods. We further enhance the performance of the region classifier by embedding it in a Markov Random Field (MRF) framework and adding pairwise contextual cues. This leads to a method that outperforms the state-of-the-art for shadow detection. In addition we propose a new method for shadow removal based on region relighting. For each shadow region we use a trained classifier to identify a neighboring lit region of the same material. Given a pair of lit-shadow regions we perform a region relighting transformation based on histogram matching of luminance values between the shadow region and the lit region. Once a shadow is detected, we demonstrate that our shadow removal approach produces results that outperform the state of the art by evaluating our method using a publicly available benchmark dataset.",
"title": ""
},
{
"docid": "15d932b1344d48f13dfbb5e7625b22ad",
"text": "Predictive modeling of human or humanoid movement becomes increasingly complex as the dimensionality of those movements grows. Dynamic Movement Primitives (DMP) have been shown to be a powerful method of representing such movements, but do not generalize well when used in configuration or task space. To solve this problem we propose a model called autoencoded dynamic movement primitive (AE-DMP) which uses deep autoencoders to find a representation of movement in a latent feature space, in which DMP can optimally generalize. The architecture embeds DMP into such an autoencoder and allows the whole to be trained as a unit. To further improve the model for multiple movements, sparsity is added for the feature layer neurons; therefore, various movements can be observed clearly in the feature space. After training, the model finds a single hidden neuron from the sparsity that can efficiently generate new movements. Our experiments clearly demonstrate the efficiency of missing data imputation using 50-dimensional human movement data.",
"title": ""
}
] |
scidocsrr
|
5a1059d4321e5bbc017d81813556d3ad
|
Forces acting on a biped robot. Center of pressure-zero moment point
|
[
{
"docid": "1db57f3b594afa363c81c8e63cc82c3c",
"text": "This paper newly considers the ZMP(Zero Moment Point) of a humanoid robot under arm/leg coordination. By considering the infinitesimal displacement and the moment acting on the convex hull of the supporting points, we show that our method for determining the region of ZMP can be applicable to several cases of the arm/leg coordination tasks. We first express two kinds of ZMPs for such coordination tasks, i.e., the conventional ZMP, and the “Generalized Zero Moment Point (GZMP)” which is a generalization of the ZMP to the arm/leg coordination tasks. By projecting the edges of the convex hull of the supporting points onto the floor, we show that the position and the region of the GZMP for keeping the dynamical balance can be uniquely obtained. The effectiveness of the proposed method is shown by simulation results(see video).",
"title": ""
}
] |
[
{
"docid": "5b7ff9036a43b32cc82ca04bdbfd9fb1",
"text": "Cloud computing provides computing resources as a service over a network. As rapid application of this emerging technology in real world, it becomes more and more important how to evaluate the performance and security problems that cloud computing confronts. Currently, modeling and simulation technology has become a useful and powerful tool in cloud computing research community to deal with these issues. In this paper, to the best of our knowledge, we review the existing results on modeling and simulation of cloud computing. We start from reviewing the basic concepts of cloud computing and its security issues, and subsequently review the existing cloud computing simulators. Furthermore, we indicate that there exist two types of cloud computing simulators, that is, simulators just based on software and simulators based on both software and hardware. Finally, we analyze and compare features of the existing cloud computing simulators.",
"title": ""
},
{
"docid": "8dce819cc31cf4899cf4bad2dd117dc1",
"text": "BACKGROUND\nCaffeine and sodium bicarbonate ingestion have been suggested to improve high-intensity intermittent exercise, but it is unclear if these ergogenic substances affect performance under provoked metabolic acidification. To study the effects of caffeine and sodium bicarbonate on intense intermittent exercise performance and metabolic markers under exercise-induced acidification, intense arm-cranking exercise was performed prior to intense intermittent running after intake of placebo, caffeine and sodium bicarbonate.\n\n\nMETHODS\nMale team-sports athletes (n = 12) ingested sodium bicarbonate (NaHCO3; 0.4 g.kg(-1) b.w.), caffeine (CAF; 6 mg.kg(-1) b.w.) or placebo (PLA) on three different occasions. Thereafter, participants engaged in intense arm exercise prior to the Yo-Yo intermittent recovery test level-2 (Yo-Yo IR2). Heart rate, blood lactate and glucose as well as rating of perceived exertion (RPE) were determined during the protocol.\n\n\nRESULTS\nCAF and NaHCO3 elicited a 14 and 23% improvement (P < 0.05), respectively, in Yo-Yo IR2 performance, post arm exercise compared to PLA. The NaHCO3 trial displayed higher [blood lactate] (P < 0.05) compared to CAF and PLA (10.5 ± 1.9 vs. 8.8 ± 1.7 and 7.7 ± 2.0 mmol.L(-1), respectively) after the Yo-Yo IR2. At exhaustion CAF demonstrated higher (P < 0.05) [blood glucose] compared to PLA and NaHCO3 (5.5 ± 0.7 vs. 4.2 ± 0.9 vs. 4.1 ± 0.9 mmol.L(-1), respectively). RPE was lower (P < 0.05) during the Yo-Yo IR2 test in the NaHCO3 trial in comparison to CAF and PLA, while no difference in heart rate was observed between trials.\n\n\nCONCLUSIONS\nCaffeine and sodium bicarbonate administration improved Yo-Yo IR2 performance and lowered perceived exertion after intense arm cranking exercise, with greater overall effects of sodium bicarbonate intake.",
"title": ""
},
{
"docid": "057b397d3b72a30352697ce0940e490a",
"text": "Recent events of multiple earthquakes in Nepal, Italy and New Zealand resulting loss of life and resources bring our attention to the ever growing significance of disaster management, especially in the context of large scale nature disasters such as earthquake and Tsunami. In this paper, we focus on how disaster communication system can benefit from recent advances in wireless communication technologies especially mobile technologies and devices. The paper provides an overview of how the new generation of telecommunications and technologies such as 4G/LTE, Device to Device (D2D) and 5G can improve the potential of disaster networks. D2D is a promising technology for 5G networks, providing high data rates, increased spectral and energy efficiencies, reduced end-to-end delay and transmission power. We examine a scenario of multi-hop D2D communications where one UE may help other UEs to exchange information, by utilizing cellular network technique. Results show the average energy-efficiency spectral- efficiency of these transmission types are enhanced when the number of hops used in multi-hop links increases. The effect of resource group allocation is also pointed out for efficient design of system.",
"title": ""
},
{
"docid": "db2160b80dd593c33661a16ed2e404d1",
"text": "Steganalysis tools play an important part in saving time and providing new angles of attack for forensic analysts. StegExpose is a solution designed for use in the real world, and is able to analyse images for LSB steganography in bulk using proven attacks in a time efficient manner. When steganalytic methods are combined intelligently, they are able generate even more accurate results. This is the prime focus of StegExpose.",
"title": ""
},
{
"docid": "252f7393393a7ef16eda8388d601ef00",
"text": "In computer vision, moving object detection and tracking methods are the most important preliminary steps for higher-level video analysis applications. In this frame, background subtraction (BS) method is a well-known method in video processing and it is based on frame differencing. The basic idea is to subtract the current frame from a background image and to classify each pixel either as foreground or background by comparing the difference with a threshold. Therefore, the moving object is detected and tracked by using frame differencing and by learning an updated background model. In addition, simulated annealing (SA) is an optimization technique for soft computing in the artificial intelligence area. The p-median problem is a basic model of discrete location theory of operational research (OR) area. It is a NP-hard combinatorial optimization problem. The main aim in the p-median problem is to find p number facility locations, minimize the total weighted distance between demand points (nodes) and the closest facilities to demand points. The SA method is used to solve the p-median problem as a probabilistic metaheuristic. In this paper, an SA-based hybrid method called entropy-based SA (EbSA) is developed for performance optimization of BS, which is used to detect and track object(s) in videos. The SA modification to the BS method (SA–BS) is proposed in this study to determine the optimal threshold for the foreground-background (i.e., bi-level) segmentation and to learn background model for object detection. At these segmentation and learning stages, all of the optimization problems considered in this study are taken as p-median problems. Performances of SA–BS and regular BS methods are measured using four videoclips. Therefore, these results are evaluated quantitatively as the overall results of the given method. The obtained performance results and statistical analysis (i.e., Wilcoxon median test) show that our proposed method is more preferable than regular BS method. Meanwhile, the contribution of this",
"title": ""
},
{
"docid": "7fa9bacbb6b08065ecfe0530f082a391",
"text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.",
"title": ""
},
{
"docid": "103ec725b4c07247f1a8884610ea0e42",
"text": "In this paper we have introduced the notion of distance between two single valued neutrosophic sets and studied its properties. We have also defined several similarity measures between them and investigated their characteristics. A measure of entropy of a single valued neutrosophic set has also been introduced.",
"title": ""
},
{
"docid": "b3209409f5fa834803673ed39eb0f2a1",
"text": "Three-dimensional (3-D) urban models are an integral part of numerous applications, such as urban planning and performance simulation, mapping and visualization, emergency response training and entertainment, among others. We consolidate various algorithms proposed for reconstructing 3-D models of urban objects from point clouds. Urban models addressed in this review include buildings, vegetation, utilities such as roads or power lines and free-form architectures such as curved buildings or statues, all of which are ubiquitous in a typical urban scenario. While urban modeling, building reconstruction, in particular, clearly demand specific traits in the models, such as regularity, symmetry, and repetition; most of the traditional and state-of-the-art 3-D reconstruction algorithms are designed to address very generic objects of arbitrary shapes and topology. The recent efforts in the urban reconstruction arena, however, strive to accommodate the various pressing needs of urban modeling. Strategically, urban modeling research nowadays focuses on the usage of specialized priors, such as global regularity, Manhattan-geometry or symmetry to aid the reconstruction, or efficient adaptation of existing reconstruction techniques to the urban modeling pipeline. Aimed at an in-depth exploration of further possibilities, we review the existing urban reconstruction algorithms, prevalent in computer graphics, computer vision and photogrammetry disciplines, evaluate their performance in the architectural modeling context, and discuss the adaptability of generic mesh reconstruction techniques to the urban modeling pipeline. In the end, we suggest a few directions of research that may be adopted to close in the technology gaps.",
"title": ""
},
{
"docid": "c38a6685895c23620afb6570be4c646b",
"text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. And as the emerging field of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) which contains complex computational logic. To achieve high accuracy, researchers always build large-scale LSTM networks which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than ARM Cortex-A9 processor.",
"title": ""
},
{
"docid": "295ec5187615caec8b904c81015f4999",
"text": "As modern 64-bit x86 processors no longer support the segmentation capabilities of their 32-bit predecessors, most research projects assume that strong in-process memory isolation is no longer an affordable option. Instead of strong, deterministic isolation, new defense systems therefore rely on the probabilistic pseudo-isolation provided by randomization to \"hide\" sensitive (or safe) regions. However, recent attacks have shown that such protection is insufficient; attackers can leak these safe regions in a variety of ways.\n In this paper, we revisit isolation for x86-64 and argue that hardware features enabling efficient deterministic isolation do exist. We first present a comprehensive study on commodity hardware features that can be repurposed to isolate safe regions in the same address space (e.g., Intel MPX and MPK). We then introduce MemSentry, a framework to harden modern defense systems with commodity hardware features instead of information hiding. Our results show that some hardware features are more effective than others in hardening such defenses in each scenario and that features originally conceived for other purposes (e.g., Intel MPX for bounds checking) are surprisingly efficient at isolating safe regions compared to their software equivalent (i.e., SFI).",
"title": ""
},
{
"docid": "131a866cba7a8b2e4f66f2496a80cb41",
"text": "The Python language is highly dynamic, most notably due to late binding. As a consequence, programs using Python typically run an order of magnitude slower than their C counterpart. It is also a high level language whose semantic can be made more static without much change from a user point of view in the case of mathematical applications. In that case, the language provides several vectorization opportunities that are studied in this paper, and evaluated in the context of Pythran, an ahead-of-time compiler that turns Python module into C++ meta-programs.",
"title": ""
},
{
"docid": "09623c821f05ffb7840702a5869be284",
"text": "Area-restricted search (ARS) is a foraging strategy used by many animals to locate resources. The behavior is characterized by a time-dependent reduction in turning frequency after the last resource encounter. This maximizes the time spent in areas in which resources are abundant and extends the search to a larger area when resources become scarce. We demonstrate that dopaminergic and glutamatergic signaling contribute to the neural circuit controlling ARS in the nematode Caenorhabditis elegans. Ablation of dopaminergic neurons eliminated ARS behavior, as did application of the dopamine receptor antagonist raclopride. Furthermore, ARS was affected by mutations in the glutamate receptor subunits GLR-1 and GLR-2 and the EAT-4 glutamate vesicular transporter. Interestingly, preincubation on dopamine restored the behavior in worms with defective dopaminergic signaling, but not in glr-1, glr-2, or eat-4 mutants. This suggests that dopaminergic and glutamatergic signaling function in the same pathway to regulate turn frequency. Both GLR-1 and GLR-2 are expressed in the locomotory control circuit that modulates the direction of locomotion in response to sensory stimuli and the duration of forward movement during foraging. We propose a mechanism for ARS in C. elegans in which dopamine, released in response to food, modulates glutamatergic signaling in the locomotory control circuit, thus resulting in an increased turn frequency.",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
},
{
"docid": "00223ccf5b5aebfc23c76afb7192e3f7",
"text": "Computer Security System / technology have passed through several changes. The trends have been from what you know (e.g. password, PIN, etc) to what you have (ATM card, Driving License, etc) and presently to who you are (Biometry) or combinations of two or more of the trios. This technology (biometry) has come to solve the problems identified with knowledge-based and token-based authentication systems. It is possible to forget your password and what you have can as well be stolen. The security of determining who you are is referred to as BIOMETRIC. Biometric, in a nutshell, is the use of your body as password. This paper explores the various methods of biometric identification that have evolved over the years and the features used for each modality.",
"title": ""
},
{
"docid": "f67990c307da0b95628441e11ddfb70b",
"text": "I shall present an overview of the Java language and a brief description of the Java virtual machine|this will not be a tutorial. For a good Java tutorial and reference, I refer you to [Fla96]. There will be a brief summary and the odd snide remark about network computers. Chapter 1 History Java started o as a control language for embedded systems (c.1990)|the idea was to unite the heterogenous microcontrollers used in embedded appliances (initially mainly consumer electronics) with a common language (the Java Virtual Machine (JVM)) and API so that manufacturers wouldn't have to change their software whenever a new microcontroller came on the market. It was realised that the JVM would work just as well over the internet, and the JVM (and the Java language that was developed in association with it) was pushed as a vehicle for active web content. More recently, the Java bandwagon has acquired such initiatives as the network computer, personal and embedded Java (which take Java back to its roots as an embedded control system), and are being pushed as tools for developing `serious' applications. Unfortunately, the Java bandwagon has also acquired a tremendous amount of hype, snake-oil, and general misinformation. It is being used by Sun as a weapon in its endless and byzantine war with Microsoft, and has pioneered the industry acceptance of vapourware sales (whereby people pay to licence technologies which don't exist yet). The typical lead time for a Java technology is six months (that is, the period between it being announced as available and shipping is usually circa six months). It will be useful to keep this in mind. Most terms to do with Java are trademarks of Sun Microsystems, who controls them in the somewhat vain hope of being able to maintain some degree of standardisation. I shall refer to the JDK, by which I mean Sun's Java Development Kit, the standard Java compiler and runtime environment. 2 Chapter 2 The Java Language As mentioned in the previous section, Java is really a remote execution system, broken into two parts|Java and the Java Virtual Machine. The two are pretty much inseparable. Java has a C/C++-based syntax, inheriting the nomenclature of classes, the private, public, and protected nomenclature, and its concepts of constructors and destructors, from C++. It also borrows heavily from the family of languages that spawned Modula-3|to these it owes garbage collection, threads, exceptions, safety, and much of its inheritance model. Java introduces some new features of its own : Ubiquitous classes |classes and interfaces (which are like classes, but are used to describe speci cations rather than type de nitions) are the only real structure in Java: everything from your window system to an element of your linked list will be a class. Dynamic loading |Java provides for dynamic class (and interface) loading (indeed, it would be di cult to produce a JVM implementation without it). Unicode (2.0) source format |Java source code is written in Unicode 2.0 ([The96])|internationalisation at last ? Labelled breaks {help solve a typical problem with the break construct|you sometimes want to exit more than one loop. We can write, eg: bool k = false; while (a) { while (b) { if (a->head==b->head) { k = true; break; } b=b->tail; 3 }if (k) { break; }; a=a->tail; } Becomes foo: while (a) { while (b) { if (a->head==b->head) { break foo; }b = b->tail; }a=a->tail; } Object-Orientated Synchronisation and exception handling |every object can be locked, waited on and signalled. 
Every object which is a subclass (in Modula-3 terms, a subtype) of java.lang.Exception may be thrown as an exception. Documentation Comments |There is a special syntax for `documentation comments' (/** ...*/), which may be used to automatically generate documentation from source les. Such tools are fairly primitive at present, and if you look at the automatically generated html documentation for the JDK libraries, you will nd that you need to scoot up and down the object heirarchy several times before very much of it begins to make sense. Widely-used exceptions |Java tends to raise an exception when it encounters a run-time error, rather than aborting your program|so, for example, attempting an out-of-bounds array access throws ArrayIndexOutOfBoundsException rather than aborting your program. It will be useful to note here that Java has complete safety|there are no untraced references, and no way to do pointer arithmetic. Anything unsafe must be done outside Java by another language, the results being communicated back via. the foreign language interface mechanism, native methods, which we will consider later. 2.1 Types Java has a fairly typical type system. As in Modula-3, there are two classes of types|base types and reference types.4 2.1.1 Base types The following categories of base types are de ned: Boolean: bool2 ftrue; falseg 1 Integral { byte 2 f 27 : : :27 1g { short 2 f 215 : : :215 1g { int 2 f 231 : : :231 1g { long 2 f 261 : : :261 1g { char 2 f0 : : :FFFF16g Floating point: IEEE 754 single precision (float), and IEEE 754 double precistion (double) oating point numbers. Note that there is no extended type, so you cannot use IEEE extended precision. You will observe a number of changes from C: No enumerations |the intended methodology is to use class (or interface) variables, eg. static int RED=1;. Hopefully, this will become clearer later. 16-bit char |char has been widened to 16 bits to accomodate Unicode characters. C programmers (and others who assume sizeof(char)==1) beware! No signed or unsigned |this avoids the problems that unsigned types always cause: either LAST(unsigned k) = 2*LAST(k)+1 (C), in which case implicit conversions to signed types can fail, or LAST(unsigned k) = LAST(signed k) (Modula-3) in which case you can never subtract two signed types and put their results in an unsigned variable (try Rect.Horsize(Rect.Full) and watch the pretty value out of range errors abort your program. . . ). 2.1.2 Reference types Reference types subsume classes, interfaces, objects and arrays. The Java equivalent of NIL is null, and the equivalent of ROOT is java.lang.Object. Note that we need no equivalent for ADDRESS, as there are no untraced references in Java, and we need no equivalent for REFANY as there are no records, and it turns out that arrays are also objects2. 1This is the only type that doesn't exist in the JVM|see 3. 2though this is obviously not explicit, since it would introduce parametric polymorphism into the type system. It is, however, possible to introduce parametric polymorphism, as we shall see later in our discussion of Pizza. 5 2.2 Operators and conversion With Java's syntax lifted mostly from C and C++, it is no surprise to nd that it shares many of the same operators for base types: < <= > >= == != && || return a boolean. + * / % ++ -<< >> >>> ~ & | ^ ?: ++ -+= -= *= /= &= |= =̂ %= <<= >>= >>>= instanceof is a binary operator (a instanceof T) which returns a boolean| true if a is of type T, and false otherwise. 
Conversion is done by typecasting, as in C, using ( and ). .. and + can also be used for strings (\"foo\" + \"bar\"). You will note that the comparison operators now return a boolean, and that Java has standardised (mainly through not having unary *) the behaviour of *=. There is also a new right shift operator, >>>, meaning `arithmetic shift right', some syntactic sugar for concatenating strings, and instanceof and casting replace ISTYPE and NARROW respectively. The `+' syntax for strings is similar to & in Modula-3; note, however, that Java distinguishes between constant strings (of class java.lang.String) and mutable strings (of class java.lang.StringBuffer). \"a\" + \"b\" produces a new String, \"ab\". Integer and oating-point operations with mixed-precision types (eg. int + long or float + double) implicitly convert all their arguments to the `widest' type present, and their results are of the type of their widest operand. Numerical analysts beware. . . There are actually several types of type conversion in Java: Identity conversions |the identity conversion. Assignment conversion |takes place when assigning a variable to the value of an expression. Primitive Widening conversion |widens a value of a base type to another base type with a greater range, and may also convert integer types to oating point types. Primitive Narrowing conversion |the inverse of primitive widening conversion (narrows to a type with a smaller range), and may also convert oating point types to integer types. Widening reference conversion |intuitively, converts an object of a given type to one of one of its supertypes. 6 Narrowing reference conversion |the inverse of widening reference conversion (the reference conversions are like NARROW() in Modula-3). String conversion |there is a conversion from any type to type String. Forbidden Conversions |some conversions are forbidden. Assignment Conversion |occurs during assignment. Method invocation conversion |occurs during method invocation. Casting conversion |occurs when the casting operator is used, eg. (Foo)bar. All of which are described in excruciating detail in x5 of [GS97]. The question of reference type equivalence is a little confused due to the presence of interfaces, but Java basically uses name-equivalence, in contrast with Modula-3's structural equivalence. 2.3 Imperative Constructs Java provides basically the same imperative constructs as C, but there are a few di erences (and surprises): Scoping |Java supports nested scopes (at last!), so { int i=1; if (i==1) { int k; k=4; } }Now works properly3. Indeed, you may even declare variables half way through a scope (though it is considered bad practice to do so): { int i; foo; int k; bar; } 3Java does not support implicit scoping in for...next loops, however, so your loop variables must still be declared in the enclosing scope, or the initialisation clause of the loop. 7 Is equivalent to: { int i; foo; { int k; bar; } }And it",
"title": ""
},
{
"docid": "8c46f24d8e710c5fb4e25be76fc5b060",
"text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.",
"title": ""
},
{
"docid": "1e100608fd78b1e20020f892784199ed",
"text": "In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor specific hand tuning of measurement noise models making use of the recently introduced Statistical Inlier Estimation (SIE) method [1]. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal to noise ratio. We show that the system significantly outperform state-of-the-art in on a challenging real-world dataset.",
"title": ""
},
{
"docid": "c7bbde452a68f84ca9d09c7da2cb29ab",
"text": "Recently, application-specific requirement becomes one of main research challenges in the area of routing for delay tolerant networks. Among various requirements, in this paper, we focus on achieving the desired delivery ratio within bounded given deadline. For this goal, we use analytical model and develop a new forwarding scheme in respective phase. The proposed protocol dynamically adjusts the number of message copies by analytical model and the next hop node is determined depending on the delivery probability and the inter-meeting time of the encountering nodes as well as remaining time. Simulation results demonstrate that our proposed algorithm meets bounded delay with lower overhead than existing protocols in an adaptive way to varying network conditions.",
"title": ""
},
{
"docid": "6edf0db1e517c8786f004fd79f4ef973",
"text": "The alarming increase of resistance against multiple currently available antibiotics is leading to a rapid lose of treatment options against infectious diseases. Since the antibiotic resistance is partially due to a misuse or abuse of the antibiotics, this situation can be reverted when improving their use. One strategy is the optimization of the antimicrobial dosing regimens. In fact, inappropriate drug choice and suboptimal dosing are two major factors that should be considered because they lead to the emergence of drug resistance and consequently, poorer clinical outcomes. Pharmacokinetic/pharmacodynamic (PK/PD) analysis in combination with Monte Carlo simulation allows to optimize dosing regimens of the antibiotic agents in order to conserve their therapeutic value. Therefore, the aim of this review is to explain the basis of the PK/PD analysis and associated techniques, and provide a brief revision of the applications of PK/PD analysis from a therapeutic point-of-view. The establishment and reevaluation of clinical breakpoints is the sticking point in antibiotic therapy as the clinical use of the antibiotics depends on them. Two methodologies are described to establish the PK/PD breakpoints, which are a big part of the clinical breakpoint setting machine. Furthermore, the main subpopulations of patients with altered characteristics that can condition the PK/PD behavior (such as critically ill, elderly, pediatric or obese patients) and therefore, the outcome of the antibiotic therapy, are reviewed. Finally, some recommendations are provided from a PK/PD point of view to enhance the efficacy of prophylaxis protocols used in surgery.",
"title": ""
},
{
"docid": "035cb90504d8bf4bff9c9bac7d8c4306",
"text": "Automated trolleys have been developed to meet the needs of material handling in industries. The velocity of automated trolleys is regulated by an S-shaped (or trapezoid-shaped) acceleration and deceleration profile. In consequence of the velocity profile, the control system of automated trolleys is nonlinear and open-looped. In order to linearize the control system, we use a second order dynamic element to replace the acceleration and declaration curve in practice, and design an optimal controller under the quadratic cost function. Performance of the proposed approach is also compared to the conventional method. The simulation shows a better dynamic performance of the developed control system.",
"title": ""
}
] |
scidocsrr
|
79ef050bffaf659a0ec1b26ba8fcd5b1
|
Discussing the Value of Automatic Hate Speech Detection in Online Debates
|
[
{
"docid": "d5d2e1feeb2d0bf2af49e1d044c9e26a",
"text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053",
"title": ""
},
{
"docid": "79ece5e02742de09b01908668383e8f2",
"text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.",
"title": ""
},
{
"docid": "c8dbc63f90982e05517bbdb98ebaeeb5",
"text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.",
"title": ""
},
{
"docid": "f5a188c87dd38a0a68612352891bcc3f",
"text": "Sentiment analysis of online documents such as news articles, blogs and microblogs has received increasing attention in recent years. In this article, we propose an efficient algorithm and three pruning strategies to automatically build a word-level emotional dictionary for social emotion detection. In the dictionary, each word is associated with the distribution on a series of human emotions. In addition, a method based on topic modeling is proposed to construct a topic-level dictionary, where each topic is correlated with social emotions. Experiment on the real-world data sets has validated the effectiveness and reliability of the methods. Compared with other lexicons, the dictionary generated using our approach is language-independent, fine-grained, and volume-unlimited. The generated dictionary has a wide range of applications, including predicting the emotional distribution of news articles, identifying social emotions on certain entities and news events.",
"title": ""
}
] |
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "7f5815a918c6d04783d68dbc041cc6a0",
"text": "This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.",
"title": ""
},
{
"docid": "dfdf2581010777e51ff3e29c5b9aee7f",
"text": "This paper proposes a parallel architecture with resistive crosspoint array. The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.",
"title": ""
},
{
"docid": "74287743f75368623da74e716ae8e263",
"text": "Organizations increasingly use social media and especially social networking sites (SNS) to support their marketing agenda, enhance collaboration, and develop new capabilities. However, the success of SNS initiatives is largely dependent on sustainable user participation. In this study, we argue that the continuance intentions of users may be gendersensitive. To theorize and investigate gender differences in the determinants of continuance intentions, this study draws on the expectation-confirmation model, the uses and gratification theory, as well as the self-construal theory and its extensions. Our survey of 488 users shows that while both men and women are motivated by the ability to selfenhance, there are some gender differences. Specifically, while women are mainly driven by relational uses, such as maintaining close ties and getting access to social information on close and distant networks, men base their continuance intentions on their ability to gain information of a general nature. Our research makes several contributions to the discourse in strategic information systems literature concerning the use of social media by individuals and organizations. Theoretically, it expands the understanding of the phenomenon of continuance intentions and specifically the role of the gender differences in its determinants. On a practical level, it delivers insights for SNS providers and marketers into how satisfaction and continuance intentions of male and female SNS users can be differentially promoted. Furthermore, as organizations increasingly rely on corporate social networks to foster collaboration and innovation, our insights deliver initial recommendations on how organizational social media initiatives can be supported with regard to gender-based differences. 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b4b6417ea0e1bc70c5faa50f8e2edf59",
"text": "As secure processing as well as correct recovery of data getting more important, digital forensics gain more value each day. This paper investigates the digital forensics tools available on the market and analyzes each tool based on the database perspective. We present a survey of digital forensics tools that are either focused on data extraction from databases or assist in the process of database recovery. In our work, a detailed list of current database extraction software is provided. We demonstrate examples of database extractions executed on representative selections from among tools provided in the detailed list. We use a standard sample database with each tool for comparison purposes. Based on the execution results obtained, we compare these tools regarding different criteria such as runtime, static or live acquisition, and more.",
"title": ""
},
{
"docid": "cd78dd2ef989917c01a325a460c07223",
"text": "This paper proposes a multi-joint-gripper that achieves envelope grasping for unknown shape objects. Proposed mechanism is based on a chain of Differential Gear Systems (DGS) controlled by only one motor. It also has a Variable Stiffness Mechanism (VSM) that controls joint stiffness to relieve interfering effects suffered from grasping environment and achieve a dexterous grasping. The experiments elucidate that the developed gripper achieves envelop grasping; the posture of the gripper automatically fits the shape of the object with no sensory feedback. And they also show that the VSM effectively works to relieve external interfering. This paper shows the mechanism and experimental results of the second test machine that was developed inheriting the idea of DGS used in the first test machine but has a completely altered VSM.",
"title": ""
},
{
"docid": "7ea3d3002506e0ea6f91f4bdab09c2d5",
"text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.",
"title": ""
},
{
"docid": "56b0876b265437f1f3e6f4fc25592685",
"text": "Currently, progressively larger deep neural networks are trained on ever growing data corpora. As this trend is only going to increase in the future, distributed training schemes are becoming increasingly relevant. A major issue in distributed training is the limited communication bandwidth between contributing nodes or prohibitive communication cost in general. These challenges become even more pressing, as the number of computation nodes increases. To counteract this development we propose sparse binary compression (SBC), a compression framework that allows for a drastic reduction of communication cost for distributed training. SBC combines existing techniques of communication delay and gradient sparsification with a novel binarization method and optimal weight update encoding to push compression gains to new limits. By doing so, our method also allows us to smoothly trade-off gradient sparsity and temporal sparsity to adapt to the requirements of the learning task. Our experiments show, that SBC can reduce the upstream communication on a variety of convolutional and recurrent neural network architectures by more than four orders of magnitude without significantly harming the convergence speed in terms of forward-backward passes. For instance, we can train ResNet50 on ImageNet in the same number of iterations to the baseline accuracy, using ×3531 less bits or train it to a 1% lower accuracy using ×37208 less bits. In the latter case, the total upstream communication required is cut from 125 terabytes to 3.35 gigabytes for every participating client.",
"title": ""
},
{
"docid": "d0f71092df2eab53e7f32eff1cb7af2e",
"text": "Topic modeling of textual corpora is an important and challenging problem. In most previous work, the “bag-of-words” assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.",
"title": ""
},
{
"docid": "076ad699191bd3df87443f427268222a",
"text": "Robotic systems for disease detection in greenhouses are expected to improve disease control, increase yield, and reduce pesticide application. We present a robotic detection system for combined detection of two major threats of greenhouse bell peppers: Powdery mildew (PM) and Tomato spotted wilt virus (TSWV). The system is based on a manipulator, which facilitates reaching multiple detection poses. Several detection algorithms are developed based on principal component analysis (PCA) and the coefficient of variation (CV). Tests ascertain the system can successfully detect the plant and reach the detection pose required for PM (along the side of the plant), yet it has difficulties in reaching the TSWV detection pose (above the plant). Increasing manipulator work-volume is expected to solve this issue. For TSWV, PCA-based classification with leaf vein removal, achieved the highest classification accuracy (90%) while the accuracy of the CV methods was also high (85% and 87%). For PM, PCA-based pixel-level classification was high (95.2%) while leaf condition classification accuracy was low (64.3%) since it was determined based on the upper side of the leaf while disease symptoms start on its lower side. Exposure of the lower side of the leaf during detection is expected to improve PM condition detection.",
"title": ""
},
{
"docid": "02eec4b9078af92a774f6e46b36808f7",
"text": "Cancer cell migration is a plastic and adaptive process integrating cytoskeletal dynamics, cell-extracellular matrix and cell-cell adhesion, as well as tissue remodeling. In response to molecular and physical microenvironmental cues during metastatic dissemination, cancer cells exploit a versatile repertoire of invasion and dissemination strategies, including collective and single-cell migration programs. This diversity generates molecular and physical heterogeneity of migration mechanisms and metastatic routes, and provides a basis for adaptation in response to microenvironmental and therapeutic challenge. We here summarize how cytoskeletal dynamics, protease systems, cell-matrix and cell-cell adhesion pathways control cancer cell invasion programs, and how reciprocal interaction of tumor cells with the microenvironment contributes to plasticity of invasion and dissemination strategies. We discuss the potential and future implications of predicted \"antimigration\" therapies that target cytoskeletal dynamics, adhesion, and protease systems to interfere with metastatic dissemination, and the options for integrating antimigration therapy into the spectrum of targeted molecular therapies.",
"title": ""
},
{
"docid": "832eb4f28b217842e60bfd4820bb6acb",
"text": "It has been recognized that system design will benefit from explicit study of the context in which users work. The unaided individual divorced from a social group and from supporting artifacts is no longer the model user. But with this realization about the importance of context come many difficult questions. What exactly is context? If the individual is no longer central, what is the correct unit of analysis? What are the relations between artifacts, individuals, and the social groups to which they belong? This chapter compares three approaches to the study of context: activity theory, situated action models, and distributed cognition. I consider the basic concepts each approach promulgates and evaluate the usefulness of each for the design of technology. 1",
"title": ""
},
{
"docid": "d799390b673cc28842a310af8cd1eb03",
"text": "This paper focuses on dietary approaches to control intramuscular fat deposition to increase beneficial omega-3 polyunsaturated fatty acids (PUFA) and conjugated linoleic acid content and reduce saturated fatty acids in beef. Beef lipid trans-fatty acids are considered, along with relationships between lipids in beef and colour shelf-life and sensory attributes. Ruminal lipolysis and biohydrogenation limit the ability to improve beef lipids. Feeding omega-3 rich forage increases linolenic acid and long-chain PUFA in beef lipids, an effect increased by ruminally-protecting lipids, but consequently may alter flavour characteristics and shelf-life. Antioxidants, particularly α-tocopherol, stabilise high concentrations of muscle PUFA. Currently, the concentration of long-chain omega-3 PUFA in beef from cattle fed non-ruminally-protected lipids falls below the limit considered by some authorities to be labelled a source of omega-3 PUFA. The mechanisms regulating fatty acid isomer distribution in bovine tissues remain unclear. Further enhancement of beef lipids requires greater understanding of ruminal biohydrogenation.",
"title": ""
},
{
"docid": "6a33013c19dc59d8871e217461d479e9",
"text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.",
"title": ""
},
{
"docid": "36b5440a80238293fbb2db38db04f87d",
"text": "Mobile-app quality is becoming an increasingly important issue. These apps are generally delivered through app stores that let users post reviews. These reviews provide a rich data source you can leverage to understand user-reported issues. Researchers qualitatively studied 6,390 low-rated user reviews for 20 free-to-download iOS apps. They uncovered 12 types of user complaints. The most frequent complaints were functional errors, feature requests, and app crashes. Complaints about privacy and ethical issues and hidden app costs most negatively affected ratings. In 11 percent of the reviews, users attributed their complaints to a recent app update. This study provides insight into the user-reported issues of iOS apps, along with their frequency and impact, which can help developers better prioritize their limited quality assurance resources.",
"title": ""
},
{
"docid": "d9fcfc15c1c310aef6eec96e230074d1",
"text": "There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a “balanced” representation such that the induced treated and control distributions look similar. We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.",
"title": ""
},
{
"docid": "c817e872fa02f93ae967168a5aa15d20",
"text": "We introduce an SIR particle filter for tracking civilian targets including vehicles and pedestrians in dual-band midwave/longwave infrared imagery as well as a novel dual-band track consistency check for triggering appearance model updates. Because of the paucity of available dual-band data, we constructed a custom sensor to acquire the test sequences. The proposed algorithm is robust against magnification changes, aspect changes, and clutter and successfully tracked all 17 cases tested, including two partial occlusions. Future work is needed to comprehensively evaluate performance of the algorithm against state-of-the-art video trackers, especially considering the relatively small number of previous dual-band tracking results that have appeared.",
"title": ""
},
{
"docid": "9f5f79a19d3a181f5041a7b5911db03a",
"text": "BACKGROUND\nNucleoside analogues against herpes simplex virus (HSV) have been shown to suppress shedding of HSV type 2 (HSV-2) on genital mucosal surfaces and may prevent sexual transmission of HSV.\n\n\nMETHODS\nWe followed 1484 immunocompetent, heterosexual, monogamous couples: one with clinically symptomatic genital HSV-2 and one susceptible to HSV-2. The partners with HSV-2 infection were randomly assigned to receive either 500 mg of valacyclovir once daily or placebo for eight months. The susceptible partner was evaluated monthly for clinical signs and symptoms of genital herpes. Source partners were followed for recurrences of genital herpes; 89 were enrolled in a substudy of HSV-2 mucosal shedding. Both partners were counseled on safer sex and were offered condoms at each visit. The predefined primary end point was the reduction in transmission of symptomatic genital herpes.\n\n\nRESULTS\nClinically symptomatic HSV-2 infection developed in 4 of 743 susceptible partners who were given valacyclovir, as compared with 16 of 741 who were given placebo (hazard ratio, 0.25; 95 percent confidence interval, 0.08 to 0.75; P=0.008). Overall, acquisition of HSV-2 was observed in 14 of the susceptible partners who received valacyclovir (1.9 percent), as compared with 27 (3.6 percent) who received placebo (hazard ratio, 0.52; 95 percent confidence interval, 0.27 to 0.99; P=0.04). HSV DNA was detected in samples of genital secretions on 2.9 percent of the days among the HSV-2-infected (source) partners who received valacyclovir, as compared with 10.8 percent of the days among those who received placebo (P<0.001). The mean rates of recurrence were 0.11 per month and 0.40 per month, respectively (P<0.001).\n\n\nCONCLUSIONS\nOnce-daily suppressive therapy with valacyclovir significantly reduces the risk of transmission of genital herpes among heterosexual, HSV-2-discordant couples.",
"title": ""
},
{
"docid": "945f94bd0022e14c1726cb36dd5deefc",
"text": "This paper introduces a mobile human airbag system designed for fall protection for the elderly. A Micro Inertial Measurement Unit ( muIMU) of 56 mm times 23 mm times 15 mm in size is built. This unit consists of three dimensional MEMS accelerometers, gyroscopes, a Bluetooth module and a Micro Controller Unit (MCU). It records human motion information, and, through the analysis of falls using a high-speed camera, a lateral fall can be determined by gyro threshold. A human motion database that includes falls and other normal motions (walking, running, etc.) is set up. Using a support vector machine (SVM) training process, we can classify falls and other normal motions successfully with a SVM filter. Based on the SVM filter, an embedded digital signal processing (DSP) system is developed for real-time fall detection. In addition, a smart mechanical airbag deployment system is finalized. The response time for the mechanical trigger is 0.133 s, which allows enough time for compressed air to be released before a person falls to the ground. The integrated system is tested and the feasibility of the airbag system for real-time fall protection is demonstrated.",
"title": ""
},
{
"docid": "20830c435c95317fbd189341ff5cdebd",
"text": "Relational databases store a significant amount of the worlds data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from inthe-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policybased reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.",
"title": ""
}
] |
scidocsrr
|
bbe53e97217ac3ad077acae6c04db5fa
|
Efficient Markov Logic Inference for Natural Language Semantics
|
[
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
}
] |
[
{
"docid": "ef15cf49c90ef4b115b42ee96fa24f93",
"text": "Visual question answering (VQA) is challenging because it requires a simultaneous understanding of both the visual content of images and the textual content of questions. The approaches used to represent the images and questions in a fine-grained manner and questions and to fuse these multimodal features play key roles in performance. Bilinear pooling based models have been shown to outperform traditional linear models for VQA, but their high-dimensional representations and high computational complexity may seriously limit their applicability in practice. For multimodal feature fusion, here we develop a Multi-modal Factorized Bilinear (MFB) pooling approach to efficiently and effectively combine multi-modal features, which results in superior performance for VQA compared with other bilinear pooling approaches. For fine-grained image and question representation, we develop a ‘co-attention’ mechanism using an end-to-end deep network architecture to jointly learn both the image and question attentions. Combining the proposed MFB approach with co-attention learning in a new network architecture provides a unified model for VQA. Our experimental results demonstrate that the single MFB with co-attention model achieves new state-of-theart performance on the real-world VQA dataset. Code available at https://github.com/yuzcccc/mfb.",
"title": ""
},
{
"docid": "400d7ef5f744b41091221c1aebc46cf0",
"text": "This paper presents the design and analysis of a novel machine family-the enclosed-rotor Halbach-array permanent-magnet brushless dc motors for spacecraft applications. The initial design, selection of major parameters, and air-gap magnetic flux density are estimated using the analytical model of the machine. The proportion of the Halbach array in the machine is optimized using finite element analysis to obtain a near-trapezoidal flux pattern. The machine is found to provide uniform air-gap flux density along the radius, thus avoiding circulating currents in stator conductors and thereby reducing torque ripple. Furthermore, the design is validated with experimental results on a fabricated machine and is found to suit the design requirements of critical spacecraft applications.",
"title": ""
},
{
"docid": "832e1a93428911406759f696eb9cb101",
"text": "Reinforcement learning provides both qualitative and quantitative frameworks for understanding and modeling adaptive decision-making in the face of rewards and punishments. Here we review the latest dispatches from the forefront of this field, and map out some of the territories where lie monsters.",
"title": ""
},
{
"docid": "7588bd6798d8c2fd891acaf3c64c675f",
"text": "OBJECTIVE\nThis article presents a case report of a child with poor sensory processing and describes the disorders impact on the child's occupational behavior and the changes in occupational performance during 10 months of occupational therapy using a sensory integrative approach (OT-SI).\n\n\nMETHOD\nRetrospective chart review of assessment data and analysis of parent interview data are reviewed. Progress toward goals and objectives is measured using goal attainment scaling. Themes from parent interview regarding past and present occupational challenges are presented.\n\n\nRESULTS\nNotable improvements in occupational performance are noted on goal attainment scales, and these are consistent with improvements in behavior. Parent interview data indicate noteworthy progress in the child's ability to participate in home, school, and family activities.\n\n\nCONCLUSION\nThis case report demonstrates a model for OT-SI. The findings support the theoretical underpinnings of sensory integration theory: that improvement in the ability to process and integrate sensory input will influence adaptive behavior and occupational performance. Although these findings cannot be generalized, they provide preliminary evidence supporting the theory and the effectiveness of this approach.",
"title": ""
},
{
"docid": "a39b11d66b368bd48b056612a3e268f7",
"text": "The Unified Modeling Language (UML) is accepted today as an important standard for developing software. UML tools however provide little support for validating and checking models in early development phases. There is also no substantial support for the Object Constraint Language (OCL). We present an approach for the validation of UML models and OCL constraints based on animation and certification. The USE tool (UML-based Specification Environment) supports analysts, designers and developers in executing UML models and checking OCL constraints and thus enables them to employ model-driven techniques for software production.",
"title": ""
},
{
"docid": "235f7fbae50e3952c74cbd67345acb74",
"text": "The paper presents research results in the field of small antennas obtain ed at the Department of Wireless Communications, Faculty of Electrical Engineering and Computing, University of Zagreb. A study comparing the application of several miniaturization techniques on a shorted patch antenn is presented. Single and dual band shorted patch antennas with notches and/or slot are introduced. A PIFA d esigned for application in mobile GSM terminals is described. The application of stacked shorted patches as arr ay elements for a mobile communication base station as well as for electromagnetic field sensor is presented. The design of single and dual band folded monopoles is described. Prototypes of the presented antennas have be en manufactured and their characteristics were verified by measurements.",
"title": ""
},
{
"docid": "fa151d877d387a250caa8d1c1da32a10",
"text": "Recently, unikernels have emerged as an exploration of minimalist software stacks to improve the security of applications in the cloud. In this paper, we propose extending the notion of minimalism beyond an individual virtual machine to include the underlying monitor and the interface it exposes. We propose unikernel monitors . Each unikernel is bundled with a tiny, specialized monitor that only contains what the unikernel needs both in terms of interface and implementation. Unikernel monitors improve isolation through minimal interfaces, reduce complexity, and boot unikernels quickly. Our initial prototype,ukvm, is less than 5% the code size of a traditional monitor, and boots MirageOS unikernels in as little as 10ms (8× faster than a traditional monitor).",
"title": ""
},
{
"docid": "51bc87524f064f715bb5876f21468d9d",
"text": "Cloud computing provides an effective business model for the deployment of IT infrastructure, platform, and software services. Often, facilities are outsourced to cloud providers and this offers the service consumer virtualization technologies without the added cost burden of development. However, virtualization introduces serious threats to service delivery such as Denial of Service (DoS) attacks, Cross-VM Cache Side Channel attacks, Hypervisor Escape and Hyper-jacking. One of the most sophisticated forms of attack is the cross-VM cache side channel attack that exploits shared cache memory between VMs. A cache side channel attack results in side channel data leakage, such as cryptographic keys. Various techniques used by the attackers to launch cache side channel attack are presented, as is a critical analysis of countermeasures against cache side channel attacks.",
"title": ""
},
{
"docid": "1ebf2152d5624261951bebd68c306d5e",
"text": "A dual active bridge (DAB) is a zero-voltage switching (ZVS) high-power isolated dc-dc converter. The development of a 15-kV SiC insulated-gate bipolar transistor switching device has enabled a noncascaded medium voltage (MV) isolated dc-dc DAB converter. It offers simple control compared to a cascaded topology. However, a compact-size high frequency (HF) DAB transformer has significant parasitic capacitances for such voltage. Under high voltage and high dV/dT switching, the parasitics cause electromagnetic interference and switching loss. They also pose additional challenges for ZVS. The device capacitance and slowing of dV/dT play a major role in deadtime selection. Both the deadtime and transformer parasitics affect the ZVS operation of the DAB. Thus, for the MV-DAB design, the switching characteristics of the devices and MV HF transformer parasitics have to be closely coupled. For the ZVS mode, the current vector needs to be between converter voltage vectors with a certain phase angle defined by deadtime, parasitics, and desired converter duty ratio. This paper addresses the practical design challenges for an MV-DAB application.",
"title": ""
},
{
"docid": "664b003cedbca63ebf775bd9f062b8f1",
"text": "Since 1900, soil organic matter (SOM) in farmlands worldwide has declined drastically as a result of carbon turnover and cropping systems. Over the past 17 years, research trials were established to evaluate the efficacy of different commercial humates products on potato production. Data from humic acid (HA) trials showed that different cropping systems responded differently to different products in relation to yield and quality. Important qualifying factors included: source; concentration; processing; chelating or complexing capacity of the humic acid products; functional groups (Carboxyl; Phenol; Hydroxyl; Ketone; Ester; Ether; Amine), rotation and soil quality factors; consistency of the product in enhancing yield and quality of potato crops; mineralization effect; and influence on fertilizer use efficiency. Properties of humic substances, major constituents of soil organic matter, include chelation, mineralization, buffer effect, clay mineral-organic interaction, and cation exchange. Humates increase phosphorus availability by complexing ions into stable compounds, allowing the phosphorus ion to remain exchangeable for plants’ uptake. Collectively, the consistent use of good quality products in our replicated research plots in different years resulted in a yield increase from 11.4% to the maximum of 22.3%. Over the past decade, there has been a major increase in the quality of research and development of organic and humic acid products by some well-established manufacturers. Our experimentations with these commercial products showed an increase in the yield and quality of crops.",
"title": ""
},
{
"docid": "271f3780fe6c1d58a8f5dffbd182e1ac",
"text": "We are presenting the design of a high gain printed antenna array consisting of 420 identical patch antennas intended for FMCW radar at Ku band. The array exhibits 3 dB-beamwidths of 2° and 10° in H and E plane, respectively, side lobe suppression better than 20 dB, gain about 30 dBi and VSWR less than 2 in the frequency range 17.1 - 17.6 GHz. Excellent antenna efficiency that is between 60 and 70 % is achieved by proper impedance matching throughout the array and by using series feeding architecture with both resonant and traveling-wave feed. Enhanced cross polarization suppression is obtained by anti-phase feeding of the upper and the lower halves of the antenna. Overall antenna dimensions are 31 λ0 × 7.5 λ0.",
"title": ""
},
{
"docid": "316d341dd5ea6ebd1d4618b5a1a1b812",
"text": "OBJECTIVE\nBecause of poor overall survival in advanced ovarian malignancies, patients often turn to alternative therapies despite controversy surrounding their use. Currently, the majority of cancer patients combine some form of complementary and alternative medicine with conventional therapies. Of these therapies, antioxidants, added to chemotherapy, are a frequent choice.\n\n\nMETHODS\nFor this preliminary report, two patients with advanced epithelial ovarian cancer were studied. One patient had Stage IIIC papillary serous adenocarcinoma, and the other had Stage IIIC mixed papillary serous and seromucinous adenocarcinoma. Both patients were optimally cytoreduced prior to first-line carboplatinum/paclitaxel chemotherapy. Patient 2 had a delay in initiation of chemotherapy secondary to co-morbid conditions and had evidence for progression of disease prior to institution of therapy. Patient 1 began oral high-dose antioxidant therapy during her first month of therapy. This consisted of oral vitamin C, vitamin E, beta-carotene, coenzyme Q-10 and a multivitamin/mineral complex. In addition to the oral antioxidant therapy, patient 1 added parenteral ascorbic acid at a total dose of 60 grams given twice weekly at the end of her chemotherapy and prior to consolidation paclitaxel chemotherapy. Patient 2 added oral antioxidants just prior to beginning chemotherapy, including vitamin C, beta-carotene, vitamin E, coenzyme Q-10 and a multivitamin/mineral complex. Patient 2 received six cycles of paclitaxel/carboplatinum chemotherapy and refused consolidation chemotherapy despite radiographic evidence of persistent disease. Instead, she elected to add intravenous ascorbic acid at 60 grams twice weekly. Both patients gave written consent for the use of their records in this report.\n\n\nRESULTS\nPatient 1 had normalization of her CA-125 after the first cycle of chemotherapy and has remained normal, almost 3(1/2) years after diagnosis. CT scans of the abdomen and pelvis remain without evidence of recurrence. Patient 2 had normalization of her CA-125 after the first cycle of chemotherapy. After her first round of chemotherapy, the patient was noted to have residual disease in the pelvis. She declined further chemotherapy and added intravenous ascorbic acid. There is no evidence for recurrent disease by physical examination, and her CA-125 has remained normal three years after diagnosis.\n\n\nCONCLUSION\nAntioxidants, when added adjunctively, to first-line chemotherapy, may improve the efficacy of chemotherapy and may prove to be safe. A review of four common antioxidants follows. Because of the positive results found in these two patients, a randomized controlled trial is now underway at the University of Kansas Medical Center evaluating safety and efficacy of antioxidants when added to chemotherapy in newly diagnosed ovarian cancer.",
"title": ""
},
{
"docid": "4a8c8c09fe94cddbc9cadefa014b1165",
"text": "A solution to trajectory-tracking control problem for a four-wheel-steering vehicle (4WS) is proposed using sliding-mode approach. The advantage of this controller over current control procedure is that it is applicable to a large class of vehicles with single or double steering and to a tracking velocity that is not necessarily constant. The sliding-mode approach make the solutions robust with respect to errors and disturbances, as demonstrated by the simulation results.",
"title": ""
},
{
"docid": "75c5a3f0d57a6a39868b28685d92d7b5",
"text": "The complexity of the healthcare system is increasing, and the moral duty to provide quality patient care is threatened by the sky rocketing cost of healthcare. A major concern for both patients and the hospital’s economic bottom line are hospital-acquired infections (HAIs), including central line associated blood stream infections (CLABSIs). These often serious infections result in significantly increased patient morbidity, mortality, length of stay, and use of health care resources. Historically, most infection prevention and control measures have focused on aseptic technique of health care providers and in managing the environment. Emerging evidence for the role of host decontamination in preventing HAIs is shifting the paradigm and paving a new path for novel infection prevention interventions. Chlorhexidine gluconate has a long-standing track record of being a safe and effective product with broad antiseptic activity, and little evidence of emerging resistance. As the attention is directed toward control and prevention of HAIs, chlorhexidine-containing products may prove to be a vital tool in infection control. Increasing rates of multidrug-resistant organisms (MDROs), including methicillinresistant Staphylococcus aureus (MRSA), Acinetobacter baumanniic and vancomycin-resistant Enterococcus (VRE) demand that evidence-based research drive all interventions to prevent transmission of these organisms and the development of HAIs. This review of literature examines current evidence related to daily chlorhexidine gluconate bathing and its impact on CLABSI rates in the adult critically ill patient population.",
"title": ""
},
{
"docid": "1b6e35187b561de95051f67c70025152",
"text": "Ž . The technology acceptance model TAM proposes that ease of use and usefulness predict applications usage. The current research investigated TAM for work-related tasks with the World Wide Web as the application. One hundred and sixty-three subjects responded to an e-mail survey about a Web site they access often in their jobs. The results support TAM. They also Ž . Ž . demonstrate that 1 ease of understanding and ease of finding predict ease of use, and that 2 information quality predicts usefulness for revisited sites. In effect, the investigation applies TAM to help Web researchers, developers, and managers understand antecedents to users’ decisions to revisit sites relevant to their jobs. q 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "3440de9ea0f76ba39949edcb5e2a9b54",
"text": "This document is not intended to create, does not create, and may not be relied upon to create any rights, substantive or procedural, enforceable by law by any party in any matter civil or criminal. Findings and conclusions of the research reported here are those of the authors and do not necessarily reflect the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement by the Much of crime mapping is devoted to detecting high-crime-density areas known as hot spots. Hot spot analysis helps police identify high-crime areas, types of crime being committed, and the best way to respond. This report discusses hot spot analysis techniques and software and identifies when to use each one. The visual display of a crime pattern on a map should be consistent with the type of hot spot and possible police action. For example, when hot spots are at specific addresses, a dot map is more appropriate than an area map, which would be too imprecise. In this report, chapters progress in sophis tication. Chapter 1 is for novices to crime mapping. Chapter 2 is more advanced, and chapter 3 is for highly experienced analysts. The report can be used as a com panion to another crime mapping report ■ Identifying hot spots requires multiple techniques; no single method is suffi cient to analyze all types of crime. ■ Current mapping technologies have sig nificantly improved the ability of crime analysts and researchers to understand crime patterns and victimization. ■ Crime hot spot maps can most effective ly guide police action when production of the maps is guided by crime theories (place, victim, street, or neighborhood).",
"title": ""
},
{
"docid": "4845233571c0572570445f4e3ca4ebc2",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. You may purchase this article from the Ask*IEEE Document Delivery Service at http://www.ieee.org/services/askieee/",
"title": ""
},
{
"docid": "9075e2ae2f1345b91738f3d8ac34cfb2",
"text": "We explore how well the intersection between our own everyday memories and those captured by our smartphones can be used for what we call autobiographical authentication-a challenge-response authentication system that queries users about day-to-day experiences. Through three studies-two on MTurk and one field study-we found that users are good, but make systematic errors at answering autobiographical questions. Using Bayesian modeling to account for these systematic response errors, we derived a formula for computing a confidence rating that the attempting authenticator is the user from a sequence of question-answer responses. We tested our formula against five simulated adversaries based on plausible real-life counterparts. Our simulations indicate that our model of autobiographical authentication generally performs well in assigning high confidence estimates to the user and low confidence estimates to impersonating adversaries.",
"title": ""
},
{
"docid": "840463688f36a5fd14efa8a1a35bfb8e",
"text": "In this paper, we propose a new hybrid ant colony optimization (ACO) algorithm for feature selection (FS), called ACOFS, using a neural network. A key aspect of this algorithm is the selection of a subset of salient features of reduced size. ACOFS uses a hybrid search technique that combines the advantages of wrapper and filter approaches. In order to facilitate such a hybrid search, we designed new sets of rules for pheromone update and heuristic information measurement. On the other hand, the ants are guided in correct directions while constructing graph (subset) paths using a bounded scheme in each and every step in the algorithm. The above combinations ultimately not only provide an effective balance between exploration and exploitation of ants in the search, but also intensify the global search capability of ACO for a highquality solution in FS. We evaluate the performance of ACOFS on eight benchmark classification datasets and one gene expression dataset, which have dimensions varying from 9 to 2000. Extensive experiments were conducted to ascertain how AOCFS works in FS tasks. We also compared the performance of ACOFS with the results obtained from seven existing well-known FS algorithms. The comparison details show that ACOFS has a remarkable ability to generate reduced-size subsets of salient features while yielding significant classification accuracy. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3dfcb00385237c6cb481a5a79a02eb12",
"text": "Genetic variability of DNA repair mechanisms influences chemotherapy treatment outcome of gastric cancer. We conducted a cohort study to investigate the role of ERCC1-ERCC2 gene polymorphisms in the chemotherapy response and clinic outcome of gastric cancer. Between March 2011 and March 2013, 228 gastric patients who were newly diagnosed with histopathology were enrolled in our study. Genotypes of ERCC1 rs11615, rs3212986, rs2298881 and ERCC2 rs3212986 were conducted by polymerase chain reaction restriction fragment length polymorphism (PCR-RFLP) assay. We found that individuals carrying TT genotype of ERCC1 rs11615 and CC genotype of ERCC1 rs2298881 were associated with better response to chemotherapy and longer survival time of gastric cancer. Moreover, individuals with AA genotype of ERCC2 rs1799793 were correlated with shorter survival of gastric cancer. In conclusion, ERCC1 rs11615, rs2298881 and ERCC2 rs1799793 polymorphism play an important role in the treatment outcome of gastric cancer.",
"title": ""
}
] |
scidocsrr
|
fee603c991c0c156680cebf16071485b
|
Classifiers as a model-free group comparison test.
|
[
{
"docid": "410a76670a57db5be2cc5a7a3d10918c",
"text": "Machine learning and pattern recognition algorithms have in the past years developed to become a working horse in brain imaging and the computational neurosciences, as they are instrumental for mining vast amounts of neural data of ever increasing measurement precision and detecting minuscule signals from an overwhelming noise floor. They provide the means to decode and characterize task relevant brain states and to distinguish them from non-informative brain signals. While undoubtedly this machinery has helped to gain novel biological insights, it also holds the danger of potential unintentional abuse. Ideally machine learning techniques should be usable for any non-expert, however, unfortunately they are typically not. Overfitting and other pitfalls may occur and lead to spurious and nonsensical interpretation. The goal of this review is therefore to provide an accessible and clear introduction to the strengths and also the inherent dangers of machine learning usage in the neurosciences.",
"title": ""
}
] |
[
{
"docid": "d87edfb603b5d69bcd0e0dc972d26991",
"text": "The adult nervous system is not static, but instead can change, can be reshaped by experience. Such plasticity has been demonstrated from the most reductive to the most integrated levels, and understanding the bases of this plasticity is a major challenge. It is apparent that stress can alter plasticity in the nervous system, particularly in the limbic system. This paper reviews that subject, concentrating on: a) the ability of severe and/or prolonged stress to impair hippocampal-dependent explicit learning and the plasticity that underlies it; b) the ability of mild and transient stress to facilitate such plasticity; c) the ability of a range of stressors to enhance implicit fear conditioning, and to enhance the amygdaloid plasticity that underlies it.",
"title": ""
},
{
"docid": "ee01fcf12aab8e06c1924d1bb073b16d",
"text": "In this paper, a resampling ensemble algorithm is developed focused on the classification problems for imbalanced datasets. In this method, the small classes are oversampled and large classes are undersampled. The resampling scale is determined by the ratio of the minimum number of class and maximum number of class. Oversampling for “small” classes is done by MWMOTE technique and undersampling for “large” classes is performed according to SSO technique. Our aim is to reduce the time complexity as well as the enhancement of accuracy rate of classification result. Keywords—Imbalanced classification, Resampling algorithm, SMOTE, MWMOTE, SSO. _________________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "1b5b6c4a82436b6dcbf984a199c68b5d",
"text": "Online fashion sales present a challenging use case for personalized recommendation: Stores offer a huge variety of items in multiple sizes. Small stocks, high return rates, seasonality, and changing trends cause continuous turnover of articles for sale on all time scales. Customers tend to shop rarely, but often buy multiple items at once. We report on backtest experiments with sales data of 100k frequent shoppers at Zalando, Europe’s leading online fashion platform. To model changing customer and store environments, our recommendation method employs a pair of neural networks: To overcome the cold start problem, a feedforward network generates article embeddings in “fashion space,” which serve as input to a recurrent neural network that predicts a style vector in this space for each client, based on their past purchase sequence. We compare our results with a static collaborative filtering approach, and a popularity ranking baseline.",
"title": ""
},
{
"docid": "9dd66d538b0195b216c10cc47d3f7005",
"text": "This study presents a stochastic demand multi-product supplier selection model with service level and budget constraints using Genetic Algorithm. Recently, much attention has been given to stochastic demand due to uncertainty in the real world. Conflicting objectives also exist between profit, service level and resource utilization. In this study, the relationship between the expected profit and the number of trials as well as between the expected profit and the combination of mutation and crossover rates are investigated to identify better parameter values to efficiently run the Genetic Algorithm. Pareto optimal solutions and return on investment are analyzed to provide decision makers with the alternative options of achieving the proper budget and service level. The results show that the optimal value for the return on investment and the expected profit are obtained with a certain budget and service level constraint. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "be3fa2fbaaa362aace36d112ff09f94d",
"text": "One of the key objectives in accident data analysis to identify the main factors associated with a road and traffic accident. However, heterogeneous nature of road accident data makes the analysis task difficult. Data segmentation has been used widely to overcome this heterogeneity of the accident data. In this paper, we proposed a framework that used K-modes clustering technique as a preliminary task for segmentation of 11,574 road accidents on road network of Dehradun (India) between 2009 and 2014 (both included). Next, association rule mining are used to identify the various circumstances that are associated with the occurrence of an accident for both the entire data set (EDS) and the clusters identified by K-modes clustering algorithm. The findings of cluster based analysis and entire data set analysis are then compared. The results reveal that the combination of k mode clustering and association rule mining is very inspiring as it produces important information that would remain hidden if no segmentation has been performed prior to generate association rules. Further a trend analysis have also been performed for each clusters and EDS accidents which finds different trends in different cluster whereas a positive trend is shown by EDS. Trend analysis also shows that prior segmentation of accident data is very important before analysis.",
"title": ""
},
{
"docid": "61e7b3c7de15f87ed86ffb355d1b126c",
"text": "Temporal action detection is a very important yet challenging problem, since videos in real applications are usually long, untrimmed and contain multiple action instances. This problem requires not only recognizing action categories but also detecting start time and end time of each action instance. Many state-of-the-art methods adopt the \"detection by classification\" framework: first do proposal, and then classify proposals. The main drawback of this framework is that the boundaries of action instance proposals have been fixed during the classification step. To address this issue, we propose a novel Single Shot Action Detector (SSAD) network based on 1D temporal convolutional layers to skip the proposal generation step via directly detecting action instances in untrimmed video. On pursuit of designing a particular SSAD network that can work effectively for temporal action detection, we empirically search for the best network architecture of SSAD due to lacking existing models that can be directly adopted. Moreover, we investigate into input feature types and fusion strategies to further improve detection accuracy. We conduct extensive experiments on two challenging datasets: THUMOS 2014 and MEXaction2. When setting Intersection-over-Union threshold to 0.5 during evaluation, SSAD significantly outperforms other state-of-the-art systems by increasing mAP from $19.0%$ to $24.6%$ on THUMOS 2014 and from 7.4% to $11.0%$ on MEXaction2.",
"title": ""
},
{
"docid": "36347412c7d30ae6fde3742bbc4f21b9",
"text": "iii",
"title": ""
},
{
"docid": "aa2e16e6ed5d2610a567e358807834d4",
"text": "As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of this type of schemes have wave upon wave been proposed. In most of these studies, there is no comprehensive and systematical metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of the superior aspects over previous ones, while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory—either are found short of important security goals or lack of critical properties, especially being stuck with the security-usability tension. To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model so far. In particular, by integrating “honeywords”, traditionally the purview of system security, with a “fuzzy-verifier”, our scheme hits “two birds”: it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.",
"title": ""
},
{
"docid": "60a3538ec6a64af6f8fd447ed0fb79f5",
"text": "Several Pinned Photodiode (PPD) CMOS Image Sensors (CIS) are designed, manufactured, characterized and exposed biased to ionizing radiation up to 10 kGy(SiO2 ). In addition to the usually reported dark current increase and quantum efficiency drop at short wavelengths, several original radiation effects are shown: an increase of the pinning voltage, a decrease of the buried photodiode full well capacity, a large change in charge transfer efficiency, the creation of a large number of Total Ionizing Dose (TID) induced Dark Current Random Telegraph Signal (DC-RTS) centers active in the photodiode (even when the Transfer Gate (TG) is accumulated) and the complete depletion of the Pre-Metal Dielectric (PMD) interface at the highest TID leading to a large dark current and the loss of control of the TG on the dark current. The proposed mechanisms at the origin of these degradations are discussed. It is also demonstrated that biasing (i.e., operating) the PPD CIS during irradiation does not enhance the degradations compared to sensors grounded during irradiation.",
"title": ""
},
{
"docid": "a0ebe19188abab323122a5effc3c4173",
"text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.",
"title": ""
},
{
"docid": "bc11f3de3037b0098a6c313d879ae696",
"text": "The study of polygon meshes is a large sub-field of computer graphics and geometric modeling. Different representations of polygon meshes are used for different applications and goals. The variety of operations performed on meshes may include boolean logic, smoothing, simplification, and many others. 2.3.1 What is a mesh? A mesh is a collection of polygonal facets targeting to constitute an appropriate approximation of a real 3D object. It possesses three different combinatorial elements: vertices, edges and facets. From another viewpoint, a mesh can also be completely described by two kinds of information. The geometry information gives essentially the positions (coordinates) of all its vertices, while the connectivity information provides the adjacency relations between the different elements. 2.3.2 An example of 3D meshes As we can see in the Fig. 2.3, the facets usually consist of triangles, quadrilaterals or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes. The degree of a facet is the number of its component edges, and the valence of a vertex is defined as the number of its incident edges. 2.3.3 Classification of structures Polygon meshes may be represented in a variety of structures, using different methods to store the vertex, edge and face data. In general they include/",
"title": ""
},
{
"docid": "ed0be5db315ef63c4f96fd21c2ed7110",
"text": "In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 × 2 experiment design, we investigated a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner as compared to using a large-screen immersive display due to the similarities between the interactions afforded in the virtual compared to the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display in several metrics while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance.",
"title": ""
},
{
"docid": "a96f27e15c3bbc60810b73a5de21a06c",
"text": "Illumination always affects image quality seriously in practice. To weaken illumination effect on image quality, this paper proposes an adaptive gamma correction method. First, a mapping between pixel and gamma values is built. The gamma values are then revised using two non-linear functions to prevent image distortion. Experimental results demonstrate that the proposed method performs better in readjusting image illumination condition and improving image quality.",
"title": ""
},
{
"docid": "bad6560c8c769484a9ce213d0933923e",
"text": "Online support groups have drawn considerable attention from scholars in the past decades. While prior research has explored the interactions and motivations of users, we know relatively little about how culture shapes the way people use and understand online support groups. Drawing on ethnographic research in a Chinese online depression community, we examine how online support groups function in the context of Chinese culture for people with depression. Through online observations and interviews, we uncover the unique interactions among users in this online support group, such as peer diagnosis, peer therapy, and public journaling. These activities were intertwined with Chinese cultural values and the scarcity of mental health resources in China. We also show that online support groups play an important role in fostering individual empowerment and improving public understanding of depression in China. This paper provides insights into the interweaving of culture and online health community use and contributes to a context-rich understanding of online support groups.",
"title": ""
},
{
"docid": "ce2ef27f032d30ce2bc6aa5509a58e49",
"text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.",
"title": ""
},
{
"docid": "addad4069782620549e7a357e2c73436",
"text": "Drivable region detection is challenging since various types of road, occlusion or poor illumination condition have to be considered in a outdoor environment, particularly at night. In the past decade, Many efforts have been made to solve these problems, however, most of the already existing methods are designed for visible light cameras, which are inherently inefficient under low light conditions. In this paper, we present a drivable region detection algorithm designed for thermal-infrared cameras in order to overcome the aforementioned problems. The novelty of the proposed method lies in the utilization of on-line road initialization with a highly scene-adaptive sampling mask. Furthermore, our prior road information extraction is tailored to enforce temporal consistency among a series of images. In this paper, we also propose a large number of experiments in various scenarios (on-road, off-road and cluttered road). A total of about 6000 manually annotated images are made available in our website for the research community. Using this dataset, we compared our method against multiple state-of-the-art approaches including convolutional neural network (CNN) based methods to emphasize the robustness of our approach under challenging situations.",
"title": ""
},
{
"docid": "e38cbee5c03319d15086e9c39f7f8520",
"text": "In this paper we describe COLIN, a forward-chaining heuristic search planner, capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL2.1. Through this work we make two advances to the state-of-the-art in terms of expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward chaining search, with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3, to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan, once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare to existing state-of-the-art planners.",
"title": ""
},
{
"docid": "45881ab3fc9b2d09f211808e8c9b0a3c",
"text": "Nowadays a large number of user-adaptive systems has been developed. Commonly, the effort to build user models is repeated across applications and domains, due to the lack of interoperability and synchronization among user-adaptive systems. There is a strong need for the next generation of user models to be interoperable, i.e. to be able to exchange user model portions and to use the information that has been exchanged to enrich the user experience. This paper presents an overview of the well-established literature dealing with user model interoperability, discussing the most representative work which has provided valuable solutions to face interoperability issues. Based on a detailed decomposition and a deep analysis of the selected work, we have isolated a set of dimensions characterizing the user model interoperability process along which the work has been classified. Starting from this analysis, the paper presents some open issues and possible future deployments in the area.",
"title": ""
},
{
"docid": "24e3f865244cd3227db784b0e509edd0",
"text": "The present journal recently stated in the call for a special issue on social sustainability, ―[t]hough sustainable development is said to rest on ̳three pillars‘, one of these—social sustainability—has received significantly less attention than its bio-physical environmental and economic counterparts‖. The current issue promises to engage the concepts of ―development sustainability‖, ―bridge sustainability‖ and ―maintenance sustainability‖ and the tensions between these different aspects of social sustainability. The aim of the present study is to identify the visibility of disabled people in the academic social sustainability literature, to ascertain the impact and promises of social sustainability indicators put forward in the same literature and to engage especially with the concepts of ―development sustainability‖, ―bridge sustainability‖ and ―maintenance sustainability‖ through disability studies and ability studies lenses. We report that disabled people are barely covered in the academic social sustainability literature; of the 5165 academic articles investigated only 26 had content related to disabled people and social sustainability. We also conclude that social sustainability indicators evident in the 1909 academic articles with the phrase ―social sustainability‖ in the abstract mostly focused on products and did not reflect yet the goals outlined in the ―development sustainability‖ aspect of social sustainability proposed by Vallance such as basic needs, building social capital, justice and so on. We posit that if the focus within the social sustainability discourse shifts more toward the social that an active presence of disabled people in this OPEN ACCESS Sustainability 2013, 5 4890 discourse is essential to disabled people. We showcase the utility of an ability studies lens to further the development and application of the ―development sustainability‖, ―bridge sustainability‖ and ―maintenance sustainability‖ concepts. We outline how different ability expectations intrinsic to certain schools of thought of how to deal with human-nature relationships (for example anthropocentric versus bio/ecocentric) impact this relationship and ―bridge sustainability‖. As to ―maintenance development‖, we posit that no engagement has happened yet with the ability expectation conflicts between able-bodied and disabled people, or for that matter with the ability expectation differences between different able-bodied groups within social sustainability discourses; an analysis essential for the maintenance of development. In general, we argue that there is a need to generate ability expectation conflict maps and ability expectations conflict resolution mechanisms for all sustainable development discourses individually and for ability conflicts between sustainable development discourses.",
"title": ""
},
{
"docid": "060501be3e3335530a292a40427cf5cc",
"text": "The more electric aircraft (MEA) has motivated aircraft manufacturers since few decades. Indeed, their investigations lead to the increase of electric power in airplanes. The challenge is to decrease the weight of embedded systems and therefore, the fuel consumption. This is possible thanks to new efficient power electronic converters made of new components. As magnetic components represent a great proportion of their weight, planar components are an interesting solution to increase the power density of some switching mode power supplies. This paper presents the benefits and drawbacks of high-frequency planar transformers in dc/dc converters, different models developed for their design and different issues in MEA context related to planar’s specific geometry and technology.",
"title": ""
}
] |
scidocsrr
|
21a1dea56077f4d18daabf859ea5e91a
|
Kello Depa Depa Warmth and Competence as Universal Dimensions of Social Perception : The Stereotype Content Model and the BIAS Map
|
[
{
"docid": "c36fec7cebe04627ffcd9a689df8c5a2",
"text": "In seems there are two dimensions that underlie most judgments of traits, people, groups, and cultures. Although the definitions vary, the first makes reference to attributes such as competence, agency, and individualism, and the second to warmth, communality, and collectivism. But the relationship between the two dimensions seems unclear. In trait and person judgment, they are often positively related; in group and cultural stereotypes, they are often negatively related. The authors report 4 studies that examine the dynamic relationship between these two dimensions, experimentally manipulating the location of a target of judgment on one and examining the consequences for the other. In general, the authors' data suggest a negative dynamic relationship between the two, moderated by factors the impact of which they explore.",
"title": ""
},
{
"docid": "ae71548900779de3ee364a6027b75a02",
"text": "The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups. Derived from a sociofunctional approach, the authors predicted that groups believed to pose qualitatively distinct threats to in-group resources or processes would evoke qualitatively distinct and functionally relevant emotional reactions. Participants' reactions to a range of social groups provided a data set unique in the scope of emotional reactions and threat beliefs explored. As predicted, different groups elicited different profiles of emotion and threat reactions, and this diversity was often masked by general measures of prejudice and threat. Moreover, threat and emotion profiles were associated with one another in the manner predicted: Specific classes of threat were linked to specific, functionally relevant emotions, and groups similar in the threat profiles they elicited were also similar in the emotion profiles they elicited.",
"title": ""
},
{
"docid": "713010fe0ee95840e6001410f8a164cc",
"text": "Three studies tested the idea that when social identity is salient, group-based appraisals elicit specific emotions and action tendencies toward out-groups. Participants' group memberships were made salient and the collective support apparently enjoyed by the in-group was measured or manipulated. The authors then measured anger and fear (Studies 1 and 2) and anger and contempt (Study 3), as well as the desire to move against or away from the out-group. Intergroup anger was distinct from intergroup fear, and the inclination to act against the out-group was distinct from the tendency to move away from it. Participants who perceived the in-group as strong were more likely to experience anger toward the out-group and to desire to take action against it. The effects of perceived in-group strength on offensive action tendencies were mediated by anger.",
"title": ""
}
] |
[
{
"docid": "7daf5ad71bda51eacc68f0a1482c3e7e",
"text": "Nearly every modern mobile device includes two cameras. With advances in technology the resolution of these sensors has constantly increased. While this development provides great convenience for users, for example with video-telephony or as dedicated camera replacement, the security implications of including high resolution cameras on such devices has yet to be considered in greater detail. With this paper we demonstrate that an attacker may abuse the cameras in modern smartphones to extract valuable information from a victim. First, we consider exploiting a front-facing camera to capture a user’s keystrokes. By observing facial reflections, it is possible to capture user input with the camera. Subsequently, individual keystrokes can be extracted from the images acquired with the camera. Furthermore, we demonstrate that these cameras can be used by an attacker to extract and forge the fingerprints of a victim. This enables an attacker to perform a wide range of malicious actions, including authentication bypass on modern biometric systems and falsely implicating a person by planting fingerprints in a crime scene. Finally, we introduce several mitigation strategies for the identified threats.",
"title": ""
},
{
"docid": "11f2adab1fb7a93e0c9009a702389af1",
"text": "OBJECTIVE\nThe authors present clinical outcome data and satisfaction of patients who underwent minimally invasive vertebral body corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach and posterior short-segment instrumentation for lumbar burst fractures.\n\n\nMETHODS\nPatients with unstable lumbar burst fractures who underwent corpectomy and anterior column reconstruction via a mini-open, extreme lateral, transpsoas approach with short-segment posterior fixation were reviewed retrospectively. Demographic information, operative parameters, perioperative radiographic measurements, and complications were analyzed. Patient-reported outcome instruments (Oswestry Disability Index [ODI], 12-Item Short Form Health Survey [SF-12]) and an anterior scar-specific patient satisfaction questionnaire were recorded at the latest follow-up.\n\n\nRESULTS\nTwelve patients (7 men, 5 women, average age 42 years, range 22-68 years) met the inclusion criteria. Lumbar corpectomies with anterior column support were performed (L-1, n = 8; L-2, n = 2; L-3, n = 2) and supplemented with short-segment posterior instrumentation (4 open, 8 percutaneous). Four patients had preoperative neurological deficits, all of which improved after surgery. No new neurological complications were noted. The anterior incision on average was 6.4 cm (range 5-8 cm) in length, caused mild pain and disability, and was aesthetically acceptable to the large majority of patients. Three patients required chest tube placement for pleural violation, and 1 patient required reoperation for cage subsidence/hardware failure. Average clinical follow-up was 38 months (range 16-68 months), and average radiographic follow-up was 37 months (range 6-68 months). Preoperative lumbar lordosis and focal lordosis were significantly improved/maintained after surgery. Patients were satisfied with their outcomes, had minimal/moderate disability (average ODI score 20, range 0-52), and had good physical (SF-12 physical component score 41.7% ± 10.4%) and mental health outcomes (SF-12 mental component score 50.2% ± 11.6%) after surgery.\n\n\nCONCLUSIONS\nAnterior corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach supplemented by short-segment posterior instrumentation is a safe, effective alternative to conventional approaches in the treatment of single-level unstable burst fractures and is associated with excellent functional outcomes and patient satisfaction.",
"title": ""
},
{
"docid": "b5fea029d64084089de8e17ae9debffc",
"text": "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.",
"title": ""
},
{
"docid": "821c3c62ad0f36fc95692e4bc9db8953",
"text": "Skin metastases occur in 0.6%-10.4% of all patients with cancer and represent 2% of all skin tumors. Skin metastases from visceral malignancies are important for dermatologists and dermatopathologists because of their variable clinical appearance and presentation, frequent delay and failure in their diagnosis, relative proportion of different internal malignancies metastasizing to the skin, and impact on morbidity, prognosis, and treatment. Another factor to take into account is that cutaneous metastasis may be the first sign of clinically silent visceral cancer. The relative frequencies of metastatic skin disease tend to correlate with the frequency of the different types of primary cancer in each sex. Thus, women with skin metastases have the following distribution in decreasing order of frequency of primary malignancies: breast, ovary, oral cavity, lung, and large intestine. In men, the distribution is as follows: lung, large intestine, oral cavity, kidney, breast, esophagus, pancreas, stomach, and liver. A wide morphologic spectrum of clinical appearances has been described in cutaneous metastases. This variable clinical morphology included nodules, papules, plaques, tumors, and ulcers. From a histopathologic point of view, there are 4 main morphologic patterns of cutaneous metastases involving the dermis, namely, nodular, infiltrative, diffuse, and intravascular. Generally, cutaneous metastases herald a poor prognosis. The average survival time of patients with skin metastases is a few months. In this article, we review the clinicopathologic and immunohistochemical characteristics of cutaneous metastases from internal malignancies, classify the most common cutaneous metastases, and identify studies that may assist in diagnosing the origin of a cutaneous metastasis.",
"title": ""
},
{
"docid": "bd077cbf7785fc84e98724558832aaf6",
"text": "Two process tracing techniques, explicit information search and verbal protocols, were used to examine the information processing strategies subjects use in reaching a decision. Subjects indicated preferences among apartments. The number of alternatives available and number of dimensions of information available was varied across sets of apartments. When faced with a two alternative situation, the subjects employed search strategies consistent with a compensatory decision process. In contrast, when faced with a more complex (multialternative) decision task, the subjects employed decision strategies designed to eliminate some of the available alternatives as quickly as possible and on the basis of a limited amount of information search and evaluation. The results demonstrate that the information processing leading to choice will vary as a function of task complexity. An integration of research in decision behavior with the methodology and theory of more established areas of cognitive psychology, such as human problem solving, is advocated.",
"title": ""
},
{
"docid": "edeb56280e9645133b8ffbf40bcd9287",
"text": "The design, architecture and VLSI implementation of an image compression algorithm for high-frame rate, multi-view wireless endoscopy is presented. By operating directly on Bayer color filter array image the algorithm achieves both high overall energy efficiency and low implementation cost. It uses two-dimensional discrete cosine transform to decorrelate image values in each $$4\\times 4$$ 4 × 4 block. Resulting coefficients are encoded by a new low-complexity yet efficient entropy encoder. An adaptive deblocking filter on the decoder side removes blocking effects and tiling artifacts on very flat image, which enhance the final image quality. The proposed compressor, including a 4 KB FIFO, a parallel to serial converter and a forward error correction encoder, is implemented in 180 nm CMOS process. It consumes 1.32 mW at 50 frames per second (fps) and only 0.68 mW at 25 fps at 3 MHz clock. Low silicon area 1.1 mm $$\\times$$ × 1.1 mm, high energy efficiency (27 $$\\upmu$$ μ J/frame) and throughput offer excellent scalability to handle image processing tasks in new, emerging, multi-view, robotic capsules.",
"title": ""
},
{
"docid": "1a063741d53147eb6060a123bff96c27",
"text": "OBJECTIVE\nThe assessment of cognitive functions of adults with attention deficit hyperactivity disorder (ADHD) comprises self-ratings of cognitive functioning (subjective assessment) as well as psychometric testing (objective neuropsychological assessment). The aim of the present study was to explore the utility of these assessment strategies in predicting neuropsychological impairments of adults with ADHD as determined by both approaches.\n\n\nMETHOD\nFifty-five adults with ADHD and 66 healthy participants were assessed with regard to cognitive functioning in several domains by employing subjective and objective measurement tools. Significance and effect sizes for differences between groups as well as the proportion of patients with impairments were analyzed. Furthermore, logistic regression analyses were carried out in order to explore the validity of subjective and objective cognitive measures in predicting cognitive impairments.\n\n\nRESULTS\nBoth subjective and objective assessment tools revealed significant cognitive dysfunctions in adults with ADHD. The majority of patients displayed considerable impairments in all cognitive domains assessed. A comparison of effect sizes, however, showed larger dysfunctions in the subjective assessment than in the objective assessment. Furthermore, logistic regression models indicated that subjective cognitive complaints could not be predicted by objective measures of cognition and vice versa.\n\n\nCONCLUSIONS\nSubjective and objective assessment tools were found to be sensitive in revealing cognitive dysfunctions of adults with ADHD. Because of the weak association between subjective and objective measurements, it was concluded that subjective and objective measurements are both important for clinical practice but may provide distinct types of information and capture different aspects of functioning.",
"title": ""
},
{
"docid": "03e267aeeef5c59aab348775d264afce",
"text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].",
"title": ""
},
{
"docid": "052a83669b39822eda51f2e7222074b4",
"text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.",
"title": ""
},
{
"docid": "f2fdd2f5a945d48c323ae6eb3311d1d0",
"text": "Distributed computing systems such as clouds continue to evolve to support various types of scientific applications, especially scientific workflows, with dependable, consistent, pervasive, and inexpensive access to geographically-distributed computational capabilities. Scheduling multiple workflows on distributed computing systems like Infrastructure-as-a-Service (IaaS) clouds is well recognized as a fundamental NP-complete problem that is critical to meeting various types of Quality-of-Service (QoS) requirements. In this paper, we propose a multiobjective optimization workflow scheduling approach based on dynamic game-theoretic model aiming at reducing workflow make-spans, reducing total cost, and maximizing system fairness in terms of workload distribution among heterogeneous cloud virtual machines (VMs). We conduct extensive case studies as well based on various well-known scientific workflow templates and real-world third-party commercial IaaS clouds. Experimental results clearly suggest that our proposed approach outperform traditional ones by achieving lower workflow make-spans, lower cost, and better system fairness.",
"title": ""
},
{
"docid": "8510bcbee74c99c39a5220d54ebf4d97",
"text": "We propose a novel algorithm to detect visual saliency from video signals by combining both spatial and temporal information and statistical uncertainty measures. The main novelty of the proposed method is twofold. First, separate spatial and temporal saliency maps are generated, where the computation of temporal saliency incorporates a recent psychological study of human visual speed perception. Second, the spatial and temporal saliency maps are merged into one using a spatiotemporally adaptive entropy-based uncertainty weighting approach. The spatial uncertainty weighing incorporates the characteristics of proximity and continuity of spatial saliency, while the temporal uncertainty weighting takes into account the variations of background motion and local contrast. Experimental results show that the proposed spatiotemporal uncertainty weighting algorithm significantly outperforms state-of-the-art video saliency detection models.",
"title": ""
},
{
"docid": "59bfb330b9ca7460280fecca78383857",
"text": "Big data poses many facets and challenges when analyzing data, often described with the five big V’s of Volume, Variety, Velocity, Veracity, and Value. However, the most important V – Value can only be achieved when knowledge can be derived from the data. The volume of nowadays datasets make a manual investigation of all data records impossible and automated analysis techniques from data mining or machine learning often cannot be applied in a fully automated fashion to solve many real world analysis problems, and hence, need to be manually trained or adapted. Visual analytics aims to solve this problem with a “human-in-the-loop” approach that provides the analyst with a visual interface that tightly integrates automated analysis techniques with human interaction. However, a holistic understanding of these analytic processes is currently an under-explored research area. A major contribution of this dissertation is a conceptual model-driven approach to visual analytics that focuses on the human-machine interplay during knowledge generation. At its core, it presents the knowledge generation model which is subsequently specialized for human analytic behavior, visual interactive machine learning, and dimensionality reduction. These conceptual processes extend and combine existing conceptual works that aim to establish a theoretical foundation for visual analytics. In addition, this dissertation contributes novel methods to investigate and support human knowledge generation processes, such as semi-automation and recommendation, analytic behavior and trust building, or visual interaction with machine learning. These methods are investigated in close collaboration with real experts from different application domains (such as soccer analysis, linguistic intonation research, and criminal intelligence analysis) and hence, different data characteristics (geospatial movement, time series, and high-dimensional). The results demonstrate that this conceptual approach leads to novel, more tightly integrated, methods that support the analyst in knowledge generation. In a final broader discussion, this dissertation reflects the conceptual and methodological contributions and enumerates research areas at the intersection of data mining, machine learning, visualization, and human-computer interaction research, with the ultimate goal to make big data exploration more effective, efficient, and transparent.",
"title": ""
},
{
"docid": "2dd9bb2536fdc5e040544d09fe3dd4fa",
"text": "Low 1/f noise, low-dropout (LDO) regulators are becoming critical for the supply regulation of deep-submicron analog baseband and RF system-on-chip designs. A low-noise, high accuracy LDO regulator (LN-LDO) utilizing a chopper stabilized error amplifier is presented. In order to achieve fast response during load transients, a current-mode feedback amplifier (CFA) is designed as a second stage driving the regulation FET. In order to reduce clock feed-through and 1/f noise accumulation at the chopping frequency, a first-order digital SigmaDelta noise-shaper is used for chopping clock spectral spreading. With up to 1 MHz noise-shaped modulation clock, the LN-LDO achieves a noise spectral density of 32 nV/radic(Hz) and a PSR of 38 dB at 100 kHz. The proposed LDO is shown to reduce the phase noise of an integrated 32 MHz temperature compensated crystal oscillator (TCXO) at 10 kHz offset by 15 dB. Due to reduced 1/f noise requirements, the error amplifier silicon area is reduced by 75%, and the overall regulator area is reduced by 50% with respect to an equivalent noise static regulator. The current-mode feedback second stage buffer reduces regulator settling time by 60% in comparison to an equivalent power consumption voltage mode buffer, achieving 0.6 mus settling time for a 25-mA load step. The LN-LDO is designed and fabricated on a 0.25 mum CMOS process with five layers of metal, occupying 0.88 mm2.",
"title": ""
},
{
"docid": "97b212bb8fde4859e368941a4e84ba90",
"text": "What appears to be a simple pattern of results—distributed-study opportunities usually produce bettermemory thanmassed-study opportunities—turns out to be quite complicated.Many ‘‘impostor’’ effects such as rehearsal borrowing, strategy changes during study, recency effects, and item skipping complicate the interpretation of spacing experiments. We suggest some best practices for future experiments that diverge from the typical spacing experiments in the literature. Next, we outline themajor theories that have been advanced to account for spacing studies while highlighting the critical experimental evidence that a theory of spacingmust explain. We then propose a tentative verbal theory based on the SAM/REMmodel that utilizes contextual variability and study-phase retrieval to explain the major findings, as well as predict some novel results. Next, we outline the major phenomena supporting testing as superior to restudy on long-term retention tests, and review theories of the testing phenomenon, along with some possible boundary conditions. Finally, we suggest some ways that spacing and testing can be integrated into the classroom, and ask to what extent educators already capitalize on these phenomena. Along the way, we present several new experiments that shed light on various facets of the spacing and testing effects.",
"title": ""
},
{
"docid": "26c003f70bbaade54b84dcb48d2a08c9",
"text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.",
"title": ""
},
{
"docid": "175733c4f95af7f68847acd393cb2a1d",
"text": "This study presents an asymmetric broadside coupled balun with low-loss broadband characteristics for mixer designs. The correlation between balun impedance and a 3D multilayer CMOS structure are discussed and analyzed. Two asymmetric multilayer meander coupled lines are adopted to implement the baluns. Three balanced mixers that comprise three miniature asymmetric broadside coupled Marchand baluns are implemented to demonstrate the applicability to MOS technology. Both a single and dual balun occupy an area of only 0.06 mm2. The balun achieves a measured bandwidth of over 120%, an insertion loss of better than 4.1 dB (3 dB for an ideal balun) at the center frequency, an amplitude imbalance of less than 1 dB, and a phase imbalance of less than 5deg from 10 to 60 GHz. The first demonstrated circuit is a Ku-band mixer, which is implemented with a miniaturized balun to reduce the chip area by 80%. This 17-GHz mixer yields a conversion loss of better than 6.8 dB with a chip size of 0.24 mm2. The second circuit is a 15-60-GHz broadband single-balanced mixer, which achieves a conversion loss of better than 15 dB and occupies a chip area of 0.24 mm2. A three-conductor miniaturized dual balun is then developed for use in the third mixer. This star mixer incorporates two miniature dual baluns to achieve a conversion loss of better than 15 dB from 27 to 54 GHz, and occupies a chip area of 0.34 mm2.",
"title": ""
},
{
"docid": "46632965f75d0b07c8f35db944277ab1",
"text": "The aim of this cross-sectional study was to assess the complications associated with tooth supported fixed dental prosthesis amongst patients reporting at University College of Dentistry Lahore, Pakistan. An interview based questionnaire was used on 112 patients followed by clinical oral examination by two calibrated dentists. Approximately 95% participants were using porcelain fused to metal prosthesis with 60% of prosthesis being used in posterior segments of mouth. Complications like dental caries, coronal abutment fracture, radicular abutment fracture, occlusal interferences, root canal failures and decementations were more significantly associated with crowns than bridges (p=0.000). On the other hand esthetic issues, periapical lesions, periodontal problems, porcelain fractures and metal damage were more commonly associated with bridges (p=0.000). All cases of dental caries reported were associated with acrylic crown and bridges, whereas all coronal abutment fractures were associated with metal prosthesis (p=0.000). A significantly higher number of participants who got their fixed dental prosthesis from other sources i.e. Paramedics, technicians, dental assistants or unqualified dentists had periapical lesions, decementations, esthetic issues and periodontal diseases. This association was found to be statistically significant (p=0.000). Complications associated with fixed dental prosthesis like root canal failures, decementations, periapical lesions and periodontal disease were more significantly associated with prosthesis fabricated by other sources over the period of 5 to 10 years.",
"title": ""
},
{
"docid": "af271bf4b478d6b46d53d9df716d75ee",
"text": "The mobile technology is an ever evolving concept. The world has seen various generations of mobile technology be it 1G, 2G, 3G or 4G. The fifth generation of mobile technology i.e. 5G is seen as a futuristic notion that would help in solving the issues that are pertaining in the 4G. In this paper we have discussed various security issues of 4G with respect to Wi-max and long term evolution. These issues are discussed at MAC and physical layer level. The security issues are seen in terms of possible attacks, system vulnerabilities and privacy concerns. We have also highlighted how the notions of 5G can be tailored to provide a more secure mobile computing environment. We have considered the futuristic architectural framework for 5G networks in our discussion. The basic concepts and features of the fifth generation technology are explained here. We have also analyzed five pillars of strength for the 5G network security which would work in collaboration with each other to provide a secure mobile computing environment to the user.",
"title": ""
},
{
"docid": "c7d2419eaec21acce9b9dbb3040ed647",
"text": "Current text classification systems typically use term stems for representing document content. Ontologies allow the usage of features on a higher semantic level than single words for text classification purposes. In this paper we propose such an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting, a successful machine learning technique is used for classification. Comparative experimental evaluations in three different settings support our approach through consistent improvement of the results. An analysis of the results shows that this improvement is due to two separate effects.",
"title": ""
},
{
"docid": "873689a68ce8b52d6df381081088d48e",
"text": "Natural Language Engineering encourages papers reporting research with a clear potential for practical application. Theoretical papers that consider techniques in sufficient detail to provide for practical implementation are also welcomed, as are shorter reports of on-going research, conference reports, comparative discussions of NLE products, and policy-oriented papers examining e.g. funding programmes or market opportunities. All contributions are peer reviewed and the review process is specifically designed to be fast, contributing to the rapid publication of accepted papers.",
"title": ""
}
] |
scidocsrr
|
d88ecaa64bc7fd5c262e305d1953f7f4
|
Short Paper: Service-Oriented Sharding for Blockchains
|
[
{
"docid": "6c09932a4747c7e2d15b06720b1c48d9",
"text": "A distributed ledger made up of mutually distrusting nodes would allow for a single global database that records the state of deals and obligations between institutions and people. This would eliminate much of the manual, time consuming effort currently required to keep disparate ledgers synchronised with each other. It would also allow for greater levels of code sharing than presently used in the financial industry, reducing the cost of financial services for everyone. We present Corda, a platform which is designed to achieve these goals. This paper provides a high level introduction intended for the general reader. A forthcoming technical white paper elaborates on the design and fundamental architectural decisions.",
"title": ""
}
] |
[
{
"docid": "49e1d016e1aae07d5e3ae1ad0e96e662",
"text": "Recently, various protocols have been proposed for securely outsourcing database storage to a third party server, ranging from systems with \"full-fledged\" security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged that show that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and ommunication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources.\n We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about $N^4$ queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way.\n Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.",
"title": ""
},
{
"docid": "c6338205328828778a2036829f0bbb6c",
"text": "In this study, the theory of technology analysis and decomposition of the 3-D (three dimensional) visualization of GIS (Geographic Information System) are analyzed, it divides the 3-D visualization of GIS into virtual reality technology, and it presents situation and development trend of 3-D visualization of GIS. It studies the urban model of 3-D data acquisition and processing, the classification of urban 3-D space information data and summarization of the characteristics of urban 3-D spatial data are made, and the three dimensional terrain data, building plane and building elevation data access, building surface texture are also analyzed. The high resolution satellite remote sensing data processing technology and aviation remote sensing data processing technology is studied, and the data acquisition and processing technology of airborne 3-D imager also are introduced This paper has solved the visualization of 3-D GIS data model and visual problem in the construction of the 3-D terrain and expression of choice of buildings, and it is to find suitable modeling route, and in order to provides a reference basis in realization of 3-D visualization of GIS. Visualization of 3-D model of the theory and method are studied in the urban construction, according to the 3D visualization in GIS and it proposed the two kinds of 3-D visualization model of GIS technology.",
"title": ""
},
{
"docid": "7d84e574d2a6349a9fc2669fdbe08bba",
"text": "Domain-specific languages (DSLs) provide high-level and domain-specific abstractions that allow expressive and concise algorithm descriptions. Since the description in a DSL hides also the properties of the target hardware, DSLs are a promising path to target different parallel and heterogeneous hardware from the same algorithm description. In theory, the DSL description can capture all characteristics of the algorithm that are required to generate highly efficient parallel implementations. However, most frameworks do not make use of this knowledge and the performance cannot reach that of optimized library implementations. In this article, we present the HIPAcc framework, a DSL and source-to-source compiler for image processing. We show that domain knowledge can be captured in the language and that this knowledge enables us to generate tailored implementations for a given target architecture. Back ends for CUDA, OpenCL, and Renderscript allow us to target discrete graphics processing units (GPUs) as well as mobile, embedded GPUs. Exploiting the captured domain knowledge, we can generate specialized algorithm variants that reach the maximal achievable performance due to the peak memory bandwidth. These implementations outperform state-of-the-art domain-specific languages and libraries significantly.",
"title": ""
},
{
"docid": "dde4e45fd477808d40b3b06599d361ff",
"text": "In this paper, we present the basic features of the flight control of the SkySails towing kite system. After introducing the coordinate definitions and the basic system dynamics, we introduce a novel model used for controller design and justify its main dynamics with results from system identification based on numerous sea trials. We then present the controller design, which we successfully use for operational flights for several years. Finally, we explain the generation of dynamical flight patterns.",
"title": ""
},
{
"docid": "48a45f03f31d8fc0daede6603f3b693a",
"text": "This paper presents GelClust, a new software that is designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike the most of commercial and non-commercial related softwares, we found that GelClust is very user-friendly and guides the user from image toward dendrogram through seven simple steps. Furthermore, the software, which is implemented in C# programming language under Windows operating system, is more accurate than similar software regarding image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported with experiments.",
"title": ""
},
{
"docid": "cf702356b3a8895f5a636cc05597b52a",
"text": "This paper investigates non-fragile exponential <inline-formula> <tex-math notation=\"LaTeX\">$ {H_\\infty }$ </tex-math></inline-formula> control problems for a class of uncertain nonlinear networked control systems (NCSs) with randomly occurring information, such as the controller gain fluctuation and the uncertain nonlinearity, and short time-varying delay via output feedback controller. Using the nominal point technique, the NCS is converted into a novel time-varying discrete time model with norm-bounded uncertain parameters for reducing the conservativeness. Based on linear matrix inequality framework and output feedback control strategy, design methods for general and optimal non-fragile exponential <inline-formula> <tex-math notation=\"LaTeX\">$ {H_\\infty }$ </tex-math></inline-formula> controllers are presented. Meanwhile, these control laws can still be applied to linear NCSs and general fragile control NCSs while introducing random variables. Finally, three examples verify the correctness of the presented scheme.",
"title": ""
},
{
"docid": "db6904a5aa2196dedf37b279e04b3ea8",
"text": "The use of animation and multimedia for learning is now further extended by the provision of entire Virtual Reality Learning Environments (VRLE). This highlights a shift in Web-based learning from a conventional multimedia to a more immersive, interactive, intuitive and exciting VR learning environment. VRLEs simulate the real world through the application of 3D models that initiates interaction, immersion and trigger the imagination of the learner. The question of good pedagogy and use of technology innovations comes into focus once again. Educators attempt to find theoretical guidelines or instructional principles that could assist them in developing and applying a novel VR learning environment intelligently. This paper introduces the educational use of Web-based 3D technologies and highlights in particular VR features. It then identifies constructivist learning as the pedagogical engine driving the construction of VRLE and discusses five constructivist learning approaches. Furthermore, the authors provide two case studies to investigate VRLEs for learning purposes. The authors conclude with formulating some guidelines for the effective use of VRLEs, including discussion of the limitations and implications for the future study of VRLEs. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9915a09a87126626633088cf4d6b9633",
"text": "This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification — EG2, CS-ID3, and IDX — and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five realworld medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET’s search in bias space and discovers a way to improve the search.",
"title": ""
},
{
"docid": "08585ddb6bfad07ce04cf85bf28f30ba",
"text": "Users of search engines interact with the system using different size and type of queries. Current search engines perform well with keyword queries but are not for verbose queries which are too long, detailed, or are expressed in more words than are needed. The detection of verbose queries may help search engines to get pertinent results. To accomplish this goal it is important to make some appropriate preprocessing techniques in order to improve classifiers effectiveness. In this paper, we propose to use BabelNet as knowledge base in the preprocessing step and then make a comparative study between different algorithms to classify queries into two classes, verbose or succinct. Our Experimental results are conducted using the TREC Robust Track as data set and different classifiers such as, decision trees probabilistic methods, rule-based methods, instance-based methods, SVM and neural networks.",
"title": ""
},
{
"docid": "0d40f7ddda91227fab3cc62a4ca2847c",
"text": "Coherent texts are not just simple sequences of clauses and sentences, but rather complex artifacts that have highly elaborate rhetorical structure. This paper explores the extent to which well-formed rhetorical structures can be automatically derived by means of surface-form-based algorithms. These algorithms identify discourse usages of cue phrases and break sentences into clauses, hypothesize rhetorical relations that hold among textual units, and produce valid rhetorical structure trees for unrestricted natural language texts. The algorithms are empirically grounded in a corpus analysis of cue phrases and rely on a first-order formalization of rhetorical structure trees. The algorithms are evaluated both intrinsically and extrinsically. The intrinsic evaluation assesses the resemblance between automatically and manually constructed rhetorical structure trees. The extrinsic evaluation shows that automatically derived rhetorical structures can be successfully exploited in the context of text summarization.",
"title": ""
},
{
"docid": "5455e7d53e6de4cbe97cbcdf6eea9806",
"text": "OBJECTIVE\nTo evaluate the clinical and radiological results in the surgical treatment of moderate and severe hallux valgus by performing percutaneous double osteotomy.\n\n\nMATERIAL AND METHOD\nA retrospective study was conducted on 45 feet of 42 patients diagnosed with moderate-severe hallux valgus, operated on in a single centre and by the same surgeon from May 2009 to March 2013. Two patients were lost to follow-up. Clinical and radiological results were recorded.\n\n\nRESULTS\nAn improvement from 48.14 ± 4.79 points to 91.28 ± 8.73 points was registered using the American Orthopedic Foot and Ankle Society (AOFAS) scale. A radiological decrease from 16.88 ± 2.01 to 8.18 ± 3.23 was observed in the intermetatarsal angle, and from 40.02 ± 6.50 to 10.51 ± 6.55 in hallux valgus angle. There was one case of hallux varus, one case of non-union, a regional pain syndrome type I, an infection that resolved with antibiotics, and a case of loosening of the osteosynthesis that required an open surgical refixation.\n\n\nDISCUSSION\nPercutaneous distal osteotomy of the first metatarsal when performed as an isolated procedure, show limitations when dealing with cases of moderate and severe hallux valgus. The described technique adds the advantages of minimally invasive surgery by expanding applications to severe deformities.\n\n\nCONCLUSIONS\nPercutaneous double osteotomy is a reproducible technique for correcting severe deformities, with good clinical and radiological results with a complication rate similar to other techniques with the advantages of shorter surgical times and less soft tissue damage.",
"title": ""
},
{
"docid": "53049f1514bc03368b8c2a0b18518100",
"text": "The Protein Data Bank (PDB; http://www.rcsb.org/pdb/ ) is the single worldwide archive of structural data of biological macromolecules. This paper describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and near-term plans for the future development of the resource.",
"title": ""
},
{
"docid": "8de1acc08d32f8840de8375078f2369a",
"text": "Widespread acceptance of virtual reality has been partially handicapped by the inability of current systems to accommodate multiple viewpoints, thereby limiting their appeal for collaborative applications. We are exploring the ability to utilize passive, untracked participants in a powerwall environment. These participants see the same image as the active, immersive participant. This does present the passive user with a varying viewpoint that does not correspond to their current position. We demonstrate the impact this will have on the perceived image and show that human psychology is actually well adapted to compensating for what, on the surface, would seem to be a very drastic distortion. We present some initial guidelines for system design that minimize the negative impact of passive participation, allowing two or more collaborative participants. We then outline future experimentation to measure user compensation for these distorted viewpoints.",
"title": ""
},
{
"docid": "03f2ba940cdde68e848d91bacbbb5f68",
"text": "The glomerular basement membrane (GBM) is the central, non-cellular layer of the glomerular filtration barrier that is situated between the two cellular components—fenestrated endothelial cells and interdigitated podocyte foot processes. The GBM is composed primarily of four types of extracellular matrix macromolecule—laminin-521, type IV collagen α3α4α5, the heparan sulphate proteoglycan agrin, and nidogen—which produce an interwoven meshwork thought to impart both size-selective and charge-selective properties. Although the composition and biochemical nature of the GBM have been known for a long time, the functional importance of the GBM versus that of podocytes and endothelial cells for establishing the glomerular filtration barrier to albumin is still debated. Together with findings from genetic studies in mice, the discoveries of four human mutations affecting GBM components in two inherited kidney disorders, Alport syndrome and Pierson syndrome, support essential roles for the GBM in glomerular permselectivity. Here, we explain in detail the proposed mechanisms whereby the GBM can serve as the major albumin barrier and discuss possible approaches to circumvent GBM defects associated with loss of permselectivity.",
"title": ""
},
{
"docid": "f489e2c0d6d733c9e2dbbdb1d7355091",
"text": "In many signal processing applications, the signals provided by the sensors are mixtures of many sources. The problem of separation of sources is to extract the original signals from these mixtures. A new algorithm, based on ideas of backpropagation learning, is proposed for source separation. No a priori information on the sources themselves is required, and the algorithm can deal even with non-linear mixtures. After a short overview of previous works in that eld, we will describe the proposed algorithm. Then, some experimental results will be discussed.",
"title": ""
},
{
"docid": "98d0a45eb8da2fa8541055014db6e238",
"text": "OBJECTIVE\nThe Multicultural Quality of Life Index is a concise instrument for comprehensive, culture-informed, and self-rated assessment of health-related quality of life. It is composed of 10 items (from physical well-being to global perception of quality of life). Each item is rated on a 10-point scale. The objective was to evaluate the reliability (test-retest), internal structure, discriminant validity, and feasibility of the Multicultural Quality of Life Index in Lima, Peru.\n\n\nMETHOD\nThe reliability was studied in general medical patients (n = 30) hospitalized in a general medical ward. The Multicultural Quality of Life Index was administered in two occasions and the correlation coefficients (\"r\") between both interviews were calculated. Its discriminant validity was studied statistically comparing the average score in a group of patients with AIDS (with presumed lower quality of life, n = 50) and the average score in a group of dentistry students and professionals (with presumed higher quality of life, n = 50). Data on its applicability and internal structure were compiled from the 130 subjects.\n\n\nRESULTS\nA high reliability correlation coefficient (r = 0.94) was found for the total score. The discriminant validity study found a significant difference between mean total score in the samples of presumed higher (7.66) and lower (5.32) quality of life. The average time to complete the Multicultural Quality of Life Index was less than 4 minutes and was reported by the majority of subjects as easily applicable. A high Cronbach's a (0.88) was also documented.\n\n\nCONCLUSIONS\nThe results reported that the Multicultural Quality of Life Index is reliable, has a high internal consistency, is capable of discriminating groups of presumed different quality of life levels, is quite efficient, and easy to use.",
"title": ""
},
{
"docid": "0305918adb88b4ca41b9257a556397a7",
"text": "We present the development and evaluation of a semantic analysis task that lies at the intersection of two very trendy lines of research in contemporary computational linguistics: (i) sentiment analysis, and (ii) natural language processing of social media text. The task was part of SemEval, the International Workshop on Semantic Evaluation, a semantic evaluation forum previously known as SensEval. P. Nakov Qatar Computing Research Institute, HBKU Tornado Tower, floor 10, P.O. box 5825, Doha, Qatar E-mail: pnakov@qf.org.qa S. Rosenthal Columbia University E-mail: sara@cs.columbia.edu S. Kiritchenko National Research Council Canada, 1200 Montreal Rd., Ottawa, ON, Canada E-mail: Svetlana.Kiritchenko@nrc-cnrc.gc.ca S. Mohammad National Research Council Canada, 1200 Montreal Rd., Ottawa, ON, Canada E-mail: saif.mohammad@nrc-cnrc.gc.ca Z. Kozareva USC Information Sciences Institute, 4676 Admiralty Way, Marina del Rey, CA 90292-6695 E-mail: zornitsa@kozareva.com A. Ritter The Ohio State University E-mail: aritter@cs.washington.edu V. Stoyanov Facebook E-mail: vesko.st@gmail.com X. Zhu National Research Council Canada, 1200 Montreal Rd., Ottawa, ON, Canada E-mail: Xiaodan.Zhu@nrc-cnrc.gc.ca 2 Preslav Nakov et al. The task ran in 2013 and 2014, attracting the highest number of participating teams at SemEval in both years, and there is an ongoing edition in 2015. The task included the creation of a large contextual and message-level polarity corpus consisting of tweets, SMS messages, LiveJournal messages, and a special test set of sarcastic tweets. The evaluation attracted 44 teams in 2013 and 46 in 2014, who used a variety of approaches. The best teams were able to outperform several baselines by sizable margins with improvement across the two years the task has been run. We hope that the long-lasting role of this task and the accompanying datasets will be to serve as a test bed for comparing different approaches, thus facilitating research.",
"title": ""
},
{
"docid": "4c7c4e56dc0831c282e41bfd31c7f3c7",
"text": "Brown et al. (1993) introduced five unsupervised, word-based, generative and statistical models, popularized as IBM models, for translating a sentence into another. These models introduce alignments which maps each word in the source language to a word in the target language. In these models there is a crucial independence assumption that all lexical entries are seen independently of one another. We hypothesize that this independence assumption might be too strong, especially for languages with a large vocabulary, for example because of rich morphology. We investigate this independence assumption by implementing IBM models 1 and 2, the least complex IBM models, and also implementing a feature-rich version of these models. Through features, similarities between lexical entries in syntax and possibly even meaning can be captured. This feature-richness, however, requires a change in parameterization of the IBM model. We follow the approach of Berg-Kirkpatrick et al. (2010) and parameterize our IBM model with a log-linear parametric form. Finally, we compare the IBM models with their log-linear variants on word alignment. We evaluate our models on the quality of word alignments with two languages with a richer vocabulary than English. Our results do not fully support our hypothesis yet, but they are promising. We believe the hypothesis can be confirmed, however, there are still many technical challenges left before the log-linear variants can become competitive with the IBM models in terms of quality and speed.",
"title": ""
},
{
"docid": "d95c080140dd50d8131bc7d43a4358e2",
"text": "The link between affect, defined as the capacity for sentimental arousal on the part of a message, and virality, defined as the probability that it be sent along, is of significant theoretical and practical importance, e.g. for viral marketing. The basic measure of virality in Twitter is the probability of retweet and we are interested in which dimensions of the content of a tweet leads to retweeting. We hypothesize that negative news content is more likely to be retweeted, while for non-news tweets positive sentiments support virality. To test the hypothesis we analyze three corpora: A complete sample of tweets about the COP15 climate summit, a random sample of tweets, and a general text corpus including news. The latter allows us to train a classifier that can distinguish tweets that carry news and non-news information. We present evidence that negative sentiment enhances virality in the news segment, but not in the non-news segment. Our findings may be summarized ’If you want to be cited: Sweet talk your friends or serve bad news to the public’.",
"title": ""
},
{
"docid": "21df2b20c9ecd6831788e00970b3ca79",
"text": "Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as, the ability to ensure security, performance guarantees or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called CloudNaaS. Customers can leverage CloudNaaS to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making CloudNaaS highly efficient. We evaluate an OpenFlow-based prototype of CloudNaaS and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.",
"title": ""
}
] |
scidocsrr
|
f8da14a9bbb705e37e93285a0b1f93ea
|
RankMBPR: Rank-Aware Mutual Bayesian Personalized Ranking for Item Recommendation
|
[
{
"docid": "83a60460228ecc780848e40ab5286a31",
"text": "A ranking approach, ListRank-MF, is proposed for collaborative filtering that combines a list-wise learning-to-rank algorithm with matrix factorization (MF). A ranked list of items is obtained by minimizing a loss function that represents the uncertainty between training lists and output lists produced by a MF ranking model. ListRank-MF enjoys the advantage of low complexity and is analytically shown to be linear with the number of observed ratings for a given user-item matrix. We also experimentally demonstrate the effectiveness of ListRank-MF by comparing its performance with that of item-based collaborative recommendation and a related state-of-the-art collaborative ranking approach (CoFiRank).",
"title": ""
},
{
"docid": "d9615510bb6cf2cb2d8089be402c193c",
"text": "Tagging plays an important role in many recent websites. Recommender systems can help to suggest a user the tags he might want to use for tagging a specific item. Factorization models based on the Tucker Decomposition (TD) model have been shown to provide high quality tag recommendations outperforming other approaches like PageRank, FolkRank, collaborative filtering, etc. The problem with TD models is the cubic core tensor resulting in a cubic runtime in the factorization dimension for prediction and learning.\n In this paper, we present the factorization model PITF (Pairwise Interaction Tensor Factorization) which is a special case of the TD model with linear runtime both for learning and prediction. PITF explicitly models the pairwise interactions between users, items and tags. The model is learned with an adaption of the Bayesian personalized ranking (BPR) criterion which originally has been introduced for item recommendation. Empirically, we show on real world datasets that this model outperforms TD largely in runtime and even can achieve better prediction quality. Besides our lab experiments, PITF has also won the ECML/PKDD Discovery Challenge 2009 for graph-based tag recommendation.",
"title": ""
},
{
"docid": "d78acb79ccd229af7529dae1408dea6a",
"text": "Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present (i) a new variant that more accurately optimizes precision at k, and (ii) a novel procedure of optimizing the mean maximum rank, which we hypothesize is useful to more accurately cover all of the user's tastes. The general approach works by sampling N positive items, ordering them by the score assigned by the model, and then weighting the example as a function of this ordered set. Our approach is studied in two real-world systems, Google Music and YouTube video recommendations, where we obtain improvements for computable metrics, and in the YouTube case, increased user click through and watch duration when deployed live on www.youtube.com.",
"title": ""
}
] |
[
{
"docid": "351e2afb110d9304b5d534be45bf2fba",
"text": "BACKGROUND\nThe Lyon Diet Heart Study is a randomized secondary prevention trial aimed at testing whether a Mediterranean-type diet may reduce the rate of recurrence after a first myocardial infarction. An intermediate analysis showed a striking protective effect after 27 months of follow-up. This report presents results of an extended follow-up (with a mean of 46 months per patient) and deals with the relationships of dietary patterns and traditional risk factors with recurrence.\n\n\nMETHODS AND RESULTS\nThree composite outcomes (COs) combining either cardiac death and nonfatal myocardial infarction (CO 1), or the preceding plus major secondary end points (unstable angina, stroke, heart failure, pulmonary or peripheral embolism) (CO 2), or the preceding plus minor events requiring hospital admission (CO 3) were studied. In the Mediterranean diet group, CO 1 was reduced (14 events versus 44 in the prudent Western-type diet group, P=0.0001), as were CO 2 (27 events versus 90, P=0.0001) and CO 3 (95 events versus 180, P=0. 0002). Adjusted risk ratios ranged from 0.28 to 0.53. Among the traditional risk factors, total cholesterol (1 mmol/L being associated with an increased risk of 18% to 28%), systolic blood pressure (1 mm Hg being associated with an increased risk of 1% to 2%), leukocyte count (adjusted risk ratios ranging from 1.64 to 2.86 with count >9x10(9)/L), female sex (adjusted risk ratios, 0.27 to 0. 46), and aspirin use (adjusted risk ratios, 0.59 to 0.82) were each significantly and independently associated with recurrence.\n\n\nCONCLUSIONS\nThe protective effect of the Mediterranean dietary pattern was maintained up to 4 years after the first infarction, confirming previous intermediate analyses. Major traditional risk factors, such as high blood cholesterol and blood pressure, were shown to be independent and joint predictors of recurrence, indicating that the Mediterranean dietary pattern did not alter, at least qualitatively, the usual relationships between major risk factors and recurrence. Thus, a comprehensive strategy to decrease cardiovascular morbidity and mortality should include primarily a cardioprotective diet. It should be associated with other (pharmacological?) means aimed at reducing modifiable risk factors. Further trials combining the 2 approaches are warranted.",
"title": ""
},
{
"docid": "4f747c2fb562be4608d1f97ead32e00b",
"text": "With rapid development of the Internet, the web contents become huge. Most of the websites are publicly available and anyone can access the contents everywhere such as workplace, home and even schools. Nevertheless, not all the web contents are appropriate for all users, especially children. An example of these contents is pornography images which should be restricted to certain age group. Besides, these images are not safe for work (NSFW) in which employees should not be seen accessing such contents. Recently, convolutional neural networks have been successfully applied to many computer vision problems. Inspired by these successes, we propose a mixture of convolutional neural networks for adult content recognition. Unlike other works, our method is formulated on a weighted sum of multiple deep neural network models. The weights of each CNN models are expressed as a linear regression problem learnt using Ordinary Least Squares (OLS). Experimental results demonstrate that the proposed model outperforms both single CNN model and the average sum of CNN models in adult content recognition.",
"title": ""
},
{
"docid": "da3e4903974879868b87b94d7cc0bf21",
"text": "INTRODUCTION\nThe existence of maternal health service does not guarantee its use by women; neither does the use of maternal health service guarantee optimal outcomes for women. The World Health Organization recommends monitoring and evaluation of maternal satisfaction to improve the quality and efficiency of health care during childbirth. Thus, this study aimed at assessing maternal satisfaction on delivery service and factors associated with it.\n\n\nMETHODS\nCommunity based cross-sectional study was conducted in Debre Markos town from March to April 2014. Systematic random sampling technique were used to select 398 mothers who gave birth within one year. The satisfaction of mothers was measured using 19 questions which were adopted from Donabedian quality assessment framework. Binary logistic regression was fitted to identify independent predictors.\n\n\nRESULT\nAmong mothers, the overall satisfaction on delivery service was found to be 318 (81.7%). Having plan to deliver at health institution (AOR = 3.30, 95% CI: 1.38-7.9) and laboring time of less than six hours (AOR = 4.03, 95% CI: 1.66-9.79) were positively associated with maternal satisfaction on delivery service. Those mothers who gave birth using spontaneous vaginal delivery (AOR = 0.11, 95% CI: 0.023-0.51) were inversely related to maternal satisfaction on delivery service.\n\n\nCONCLUSION\nThis study revealed that the overall satisfaction of mothers on delivery service was found to be suboptimal. Reasons for delivery visit, duration of labor, and mode of delivery are independent predictors of maternal satisfaction. Thus, there is a need of an intervention on the independent predictors.",
"title": ""
},
{
"docid": "a75a1d34546faa135f74aa5e6142de05",
"text": "Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks, we analyze gradient-based descent algorithms for boosting with respect to any convex objective and introduce a new measure of weak learner performance into this setting which generalizes existing work. We present the weak to strong learning guarantees for the existing gradient boosting work for strongly-smooth, strongly-convex objectives under this new measure of performance, and also demonstrate that this work fails for non-smooth objectives. To address this issue, we present new algorithms which extend this boosting approach to arbitrary convex loss functions and give corresponding weak to strong convergence results. In addition, we demonstrate experimental results that support our analysis and demonstrate the need for the new algorithms we present.",
"title": ""
},
{
"docid": "fba0ff24acbe07e1204b5fe4c492ab72",
"text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.",
"title": ""
},
{
"docid": "33cd162dc2c0132dbd4153775a569c5d",
"text": "The question whether preemptive systems are better than non-preemptive systems has been debated for a long time, but only partial answers have been provided in the real-time literature and still some issues remain open. In fact, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. In particular, limiting preemptions allows increasing program locality, making timing analysis more predictable with respect to the fully preemptive case. In this paper, we integrate the features of both preemptive and non-preemptive scheduling by considering that each task can switch to non-preemptive mode, at any time, for a bounded interval. Three methods (with different complexity and performance) are presented to calculate the longest non-preemptive interval that can be executed by each task, under fixed priorities, without degrading the schedulability of the task set, with respect to the fully preemptive case. The methods are also compared by simulations to evaluate their effectiveness in reducing the number of preemptions.",
"title": ""
},
{
"docid": "ffffbbd82482e39a1a32bd1c5848a861",
"text": "For a sustainable integration of wind power into the electricity grid, precise and robust predictions are required. With increasing installed capacity and changing energy markets, there is a growing demand for short-term predictions. Machine learning methods can be used as a purely data-driven, spatio-temporal prediction model that yields better results than traditional physical models based on weather simulations. However, there are two big challenges when applying machine learning techniques to the domain of wind power predictions. First, when applying state-of-the-art algorithms to big training data sets, the required computation times may increase to an unacceptable level. Second, the prediction performance and reliability have to be improved to cope with the requirements of the energy markets. This thesis proposes a robust and practical prediction framework based on heterogeneous machine learning ensembles. Ensemble models combine the predictions of numerous and preferably diverse models to reduce the prediction error. First, homogeneous ensemble regressors that employ a single base algorithm are analyzed. Further, the construction of heterogeneous ensembles is proposed. These models employ multiple base algorithms and benefit from a gain of diversity among the combined predictors. A comprehensive experimental evaluation shows that the combination of different techniques to an ensemble outperforms state-ofthe-art prediction models while requiring a shorter runtime. Finally, a framework for model selection based on evolutionary multi-objective optimization is presented. The method offers an efficient and comfortable balancing of a preferably low prediction error and a moderate computational cost.",
"title": ""
},
{
"docid": "aa55e655c7fa8c86d189d03c01d5db87",
"text": "Best practice reference models like COBIT, ITIL, and CMMI offer methodical support for the various tasks of IT management and IT governance. Observations reveal that the ways of using these models as well as the motivations and further aspects of their application differ significantly. Rather the models are used in individual ways due to individual interpretations. From an academic point of view we can state, that how these models are actually used as well as the motivations using them is not well understood. We develop a framework in order to structure different dimensions and modes of reference model application in practice. The development is based on expert interviews and a literature review. Hence we use design oriented and qualitative research methods to develop an artifact, a ‘framework of reference model application’. This framework development is the first step in a larger research program which combines different methods of research. The first goal is to deepen insight and improve understanding. In future research, the framework will be used to survey and analyze reference model application. The authors assume that “typical” application patterns exist beyond individual dimensions of application. The framework developed provides an opportunity of a systematically collection of data thereon. Furthermore, the so far limited knowledge of reference model application complicates their implementation as well as their use. Thus, detailed knowledge of different application patterns is required for effective support of enterprises using reference models. We assume that the deeper understanding of different patterns will support method development for implementation and use.",
"title": ""
},
{
"docid": "30bc7923529eec5ac7d62f91de804f8e",
"text": "In this paper, we consider the scene parsing problem and propose a novel MultiPath Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only single feedback, MPFRNN propagates the contextual features learned at top layer through weighted recurrent connections to multiple bottom layers to help them learn better features with such “hindsight”. For better training MPF-RNN, we propose a new strategy that considers accumulative loss at multiple recurrent steps to improve performance of the MPF-RNN on parsing small objects. With these two novel components, MPF-RNN has achieved significant improvement over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks, including traditional SiftFlow, Barcelona, CamVid, Stanford Background as well as the recently released large-scale ADE20K.",
"title": ""
},
{
"docid": "37825cd0f6ae399204a392e3b32a667b",
"text": "Abduction is inference to the best explanation. Abduction has long been studied intensively in a wide range of contexts, from artificial intelligence research to cognitive science. While recent advances in large-scale knowledge acquisition warrant applying abduction with large knowledge bases to real-life problems, as of yet no existing approach to abduction has achieved both the efficiency and formal expressiveness necessary to be a practical solution for large-scale reasoning on real-life problems. The contributions of our work are the following: (i) we reformulate abduction as an Integer Linear Programming (ILP) optimization problem, providing full support for first-order predicate logic (FOPL); (ii) we employ Cutting Plane Inference, which is an iterative optimization strategy developed in Operations Research for making abductive reasoning in full-fledged FOPL tractable, showing its efficiency on a real-life dataset; (iii) the abductive inference engine presented in this paper is made publicly available.",
"title": ""
},
{
"docid": "b06a22f8d9eb96db06f22544d39a917a",
"text": "Attaching meaning to arbitrary symbols (i.e. words) is a complex and lengthy process. In the case of numbers, it was previously suggested that this process is grounded on two early pre-verbal systems for numerical quantification: the approximate number system (ANS or 'analogue magnitude'), and the object tracking system (OTS or 'parallel individuation'), which children are equipped with before symbolic learning. Each system is based on dedicated neural circuits, characterized by specific computational limits, and each undergoes a separate developmental trajectory. Here, I review the available cognitive and neuroscientific data and argue that the available evidence is more consistent with a crucial role for the ANS, rather than for the OTS, in the acquisition of abstract numerical concepts that are uniquely human.",
"title": ""
},
{
"docid": "a889235a17e8688773ef2dd242bc4a15",
"text": "Software for safety-critical systems has to deal w ith the hazards identified by safety analysis in order to make the system safe, risk-free and fai l-safe. Software safety is a composite of many factors. Problem statement: Existing software quality models like McCall’s and Boehm’s and ISO 9126 were inadequate in addressing the software saf ety issues of real time safety-critical embedded systems. At present there does not exist any standa rd framework that comprehensively addresses the Factors, Criteria and Metrics (FCM) approach of the quality models in respect of software safety. Approach: We proposed a new model for software safety based on the McCall’s software quality model that specifically identifies the criteria cor responding to software safety in safety critical applications. The criteria in the proposed software safety model pertains to system hazard analysis, completeness of requirements, identification of sof tware-related safety-critical requirements, safetyconstraints based design, run-time issues managemen t and software safety-critical testing. Results: This model was applied to a prototype safety-critical so ftware-based Railroad Crossing Control System (RCCS). The results showed that all critical operat ions were safe and risk-free, capable of handling contingency situations. Conclusion: Development of a safety-critical system based on ou r proposed software safety model significantly enhanced the sa f operation of the overall system.",
"title": ""
},
{
"docid": "02cd879a83070af9842999c7215e7f92",
"text": "Automatic genre classification of music is an important topic in Music Information Retrieval with many interesting applications. A solution to genre classification would allow for machine tagging of songs, which could serve as metadata for building song recommenders. In this paper, we investigate the following question: Given a song, can we automatically detect its genre? We look at three characteristics of a song to determine its genre: timbre, chord transitions, and lyrics. For each method, we develop multiple data models and apply supervised machine learning algorithms including k-means, k-NN, multi-class SVM and Naive Bayes. We are able to accurately classify 65− 75% of the songs from each genre in a 5-genre classification problem between Rock, Jazz, Pop, Hip-Hop, and Metal music.",
"title": ""
},
{
"docid": "2ccbe363a448e796ad7a93d819d12444",
"text": "With the ever-growing performance gap between memory systems and disks, and rapidly improving CPU performance, virtual memory (VM) management becomes increasingly important for overall system performance. However, one of its critical components, the page replacement policy, is still dominated by CLOCK, a replacement policy developed almost 40 years ago. While pure LRU has an unaffordable cost in VM, CLOCK simulates the LRU replacement algorithm with a low cost acceptable in VM management. Over the last three decades, the inability of LRU as well as CLOCK to handle weak locality accesses has become increasingly serious, and an effective fix becomes increasingly desirable. Inspired by our I/O buffer cache replacement algorithm, LIRS [13], we propose an improved CLOCK replacement policy, called CLOCK-Pro. By additionally keeping track of a limited number of replaced pages, CLOCK-Pro works in a similar fashion as CLOCK with a VM-affordable cost. Furthermore, it brings all the much-needed performance advantages from LIRS into CLOCK. Measurements from an implementation of CLOCK-Pro in Linux Kernel 2.4.21 show that the execution times of some commonly used programs can be reduced by up to 47%.",
"title": ""
},
{
"docid": "d8d91ea6fe6ce56a357a9b716bdfe849",
"text": "Over the last years, automatic music classification has become a standard benchmark problem in the machine learning community. This is partly due to its inherent difficulty, and also to the impact that a fully automated classification system can have in a commercial application. In this paper we test the efficiency of a relatively new learning tool, Extreme Learning Machines (ELM), for several classification tasks on publicly available song datasets. ELM is gaining increasing attention, due to its versatility and speed in adapting its internal parameters. Since both of these attributes are fundamental in music classification, ELM provides a good alternative to standard learning models. Our results support this claim, showing a sustained gain of ELM over a feedforward neural network architecture. In particular, ELM provides a great decrease in computational training time, and has always higher or comparable results in terms of efficiency.",
"title": ""
},
{
"docid": "3101cfeb496db290c82b6c6650cb4a02",
"text": "Autophagy, a catabolic pathway that delivers cellular components to lysosomes for degradation, can be activated by stressful conditions such as nutrient starvation and endoplasmic reticulum (ER) stress. We report that thapsigargin, an ER stressor widely used to induce autophagy, in fact blocks autophagy. Thapsigargin does not affect autophagosome formation but leads to accumulation of mature autophagosomes by blocking autophagosome fusion with the endocytic system. Strikingly, thapsigargin has no effect on endocytosis-mediated degradation of epidermal growth factor receptor. Molecularly, while both Rab7 and Vps16 are essential regulatory components for endocytic fusion with lysosomes, we found that Rab7 but not Vps16 is required for complete autophagy flux, and that thapsigargin blocks recruitment of Rab7 to autophagosomes. Therefore, autophagosomal-lysosomal fusion must be governed by a distinct molecular mechanism compared to general endocytic fusion.",
"title": ""
},
{
"docid": "82031adaa42f7043a6bf5e44bfa72597",
"text": "In this paper, we study the problem of non-Bayesian learning over social networks by taking an axiomatic approach. As our main behavioral assumption, we postulate that agents follow social learning rules that satisfy imperfect recall, according to which they treat the current beliefs of their neighbors as sufficient statistics for all the information available to them. We establish that as long as imperfect recall represents the only point of departure from Bayesian rationality, agents’ social learning rules take a log-linear form. Our approach also enables us to provide a taxonomy of behavioral assumptions that underpin various non-Bayesian models of learning, including the canonical model of DeGroot. We then show that for a fairly large class of learning rules, the form of bounded rationality represented by imperfect recall is not an impediment to asymptotic learning, as long as agents assign weights of equal orders of magnitude to every independent piece of information. Finally, we show how the dispersion of information among different individuals in the social network determines the rate of learning.",
"title": ""
},
{
"docid": "a3aad879ca5f7e7683c1377e079c4726",
"text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods including Vector Space Methods (VSMs) such as Latent Semantic Analysis (LSA), generative text models such as topic models, matrix factorization, neural nets, and energy-based models. Many of these use nonlinear operations on co-occurrence statistics, such as computing Pairwise Mutual Information (PMI). Some use hand-tuned hyperparameters and term reweighting. Often a generative model can help provide theoretical insight into such modeling choices, but there appears to be no such model to “explain” the above nonlinear models. For example, we know of no generative model for which the correct solution is the usual (dimension-restricted) PMI model. This paper gives a new generative model, a dynamic version of the loglinear topic model of Mnih and Hinton (2007), as well as a pair of training objectives called RAND-WALK to compute word embeddings. The methodological novelty is to use the prior to compute closed form expressions for word statistics. These provide an explanation for the PMI model and other recent models, as well as hyperparameter choices. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are spatially isotropic. The model also helps explain why linear algebraic structure arises in low-dimensional semantic embeddings. Such structure has been used to solve analogy tasks by Mikolov et al. (2013a) and many subsequent papers. This theoretical explanation is to give an improved analogy solving method that improves success rates on analogy solving by a few percent.",
"title": ""
},
{
"docid": "ec5bdd52fa05364923cb12b3ff25a49f",
"text": "A system to prevent subscription fraud in fixed telecommunications with high impact on long-distance carriers is proposed. The system consists of a classification module and a prediction module. The classification module classifies subscribers according to their previous historical behavior into four different categories: subscription fraudulent, otherwise fraudulent, insolvent and normal. The prediction module allows us to identify potential fraudulent customers at the time of subscription. The classification module was implemented using fuzzy rules. It was applied to a database containing information of over 10,000 real subscribers of a major telecom company in Chile. In this database, a subscription fraud prevalence of 2.2% was found. The prediction module was implemented as a multilayer perceptron neural network. It was able to identify 56.2% of the true fraudsters, screening only 3.5% of all the subscribers in the test set. This study shows the feasibility of significantly preventing subscription fraud in telecommunications by analyzing the application information and the customer antecedents at the time of application. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cb8dbf14b79edd2a3ee045ad08230a30",
"text": "Observational data suggest a link between menaquinone (MK, vitamin K2) intake and cardiovascular (CV) health. However, MK intervention trials with vascular endpoints are lacking. We investigated long-term effects of MK-7 (180 µg MenaQ7/day) supplementation on arterial stiffness in a double-blind, placebo-controlled trial. Healthy postmenopausal women (n=244) received either placebo (n=124) or MK-7 (n=120) for three years. Indices of local carotid stiffness (intima-media thickness IMT, Diameter end-diastole and Distension) were measured by echotracking. Regional aortic stiffness (carotid-femoral and carotid-radial Pulse Wave Velocity, cfPWV and crPWV, respectively) was measured using mechanotransducers. Circulating desphospho-uncarboxylated matrix Gla-protein (dp-ucMGP) as well as acute phase markers Interleukin-6 (IL-6), high-sensitive C-reactive protein (hsCRP), tumour necrosis factor-α (TNF-α) and markers for endothelial dysfunction Vascular Cell Adhesion Molecule (VCAM), E-selectin, and Advanced Glycation Endproducts (AGEs) were measured. At baseline dp-ucMGP was associated with IMT, Diameter, cfPWV and with the mean z-scores of acute phase markers (APMscore) and of markers for endothelial dysfunction (EDFscore). After three year MK-7 supplementation cfPWV and the Stiffness Index βsignificantly decreased in the total group, whereas distension, compliance, distensibility, Young's Modulus, and the local carotid PWV (cPWV) improved in women having a baseline Stiffness Index β above the median of 10.8. MK-7 decreased dp-ucMGP by 50 % compared to placebo, but did not influence the markers for acute phase and endothelial dysfunction. In conclusion, long-term use of MK-7 supplements improves arterial stiffness in healthy postmenopausal women, especially in women having a high arterial stiffness.",
"title": ""
}
] |
scidocsrr
|
21b49bdfb29c3c05db340d50e98e7fb6
|
SWRL Rule Editor - A Web Application as Rich as Desktop Business Rule Editors
|
[
{
"docid": "0cf7ebc02a8396a615064892d9ee6f22",
"text": "With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology-development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily. 1 Evolution of Ontology Evolution Acceptance of ontologies as an integral part of knowledge-intensive applications has been growing steadily. The word ontology became a recognized substrate in fields outside the computer science, from bioinformatics to intelligence analysis. With such acceptance, came the use of ontologies in industrial systems and active publishing of ontologies on the (Semantic) Web. More and more often, developing an ontology is not a project undertaken by a single person or a small group of people in a research laboratory, but rather it is a large project with numerous participants, who are often geographically distributed, where the resulting ontologies are used in production environments with paying customers counting on robustness and reliability of the system. The Protégé ontology-development environment1 has become a widely used tool for developing ontologies, with more than 50,000 registered users. The Protégé group works closely with some of the tool’s users and we have a continuous stream of requests from them on the features that they would like to have supported in terms of managing and developing ontologies collaboratively. The configurations for collaborative development differ significantly however. For instance, Perot Systems2 uses a client–server mode of Protégé with multiple users simultaneously accessing the same copy of the ontology on the server. The NCI Center for Bioinformatics, which develops the NCI The1 http://protege.stanford.edu 2 http://www.perotsystems.com saurus3 has a different configuration: a baseline version of the Thesaurus is published regularly and between the baselines, multiple editors work asynchronously on their own versions. At the end of the cycle, the changes are reconciled. In the OBO project,4 ontology developers post their ontologies on a sourceforge site, using the sourceforge version-control system to publish successive versions. In addition to specific requirements to support each of these collaboration models, users universally request the ability to annotate their changes, to hold discussions about the changes, to see the change history with respective annotations, and so on. When developing tool support for all the different modes and tasks in the process of ontology evolution, we started with separate and unrelated sets of Protégé plugins that supported each of the collaborative editing modes. 
This approach, however, was difficult to maintain; besides, we saw that tools developed for one mode (such as change annotation) will be useful in other modes. Therefore, we have developed a single unified framework that is flexible enough to work in either synchronous or asynchronous mode, in those environments where Protégé and our plugins are used to track changes and in those environments where there is no record of the change steps. At the center of the system is a Change and Annotation Ontology (CHAO) with instances recording specific changes and meta-information about them (author, timestamp, annotations, acceptance status, etc.). When Protégé and its change-management plugins are used for ontology editing, these tools create CHAO instances as a side product of the editing process. Otherwise, the CHAO instances are created from a structural diff produced by comparing two versions. The CHAO instances then drive the user interface that displays changes between versions to a user, allows him to accept and reject changes, to view concept history, to generate a new baseline, to publish a history of changes that other applications can use, and so on. This paper makes the following contributions: – analysis and categorization of different scenarios for ontology maintenance and evolution and their functional requirements (Section 2) – development of a comprehensive solution that addresses most of the functional requirements from the different scenarios in a single unified framework (Section 3) – implementation of the solution as a set of open-source Protégé plugins (Section 4). 2 Ontology-Evolution Scenarios and Tasks. We will now discuss different scenarios for ontology maintenance and evolution, their attributes, and functional requirements.",
"title": ""
}
] |
[
{
"docid": "4d0b163e7c4c308696fa5fd4d93af894",
"text": "Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by handengineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a handful of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging highdimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning.",
"title": ""
},
{
"docid": "c122a50d90e9f4834f36a19ba827fa9f",
"text": "Cancers are able to grow by subverting immune suppressive pathways, to prevent the malignant cells as being recognized as dangerous or foreign. This mechanism prevents the cancer from being eliminated by the immune system and allows disease to progress from a very early stage to a lethal state. Immunotherapies are newly developing interventions that modify the patient's immune system to fight cancer, by either directly stimulating rejection-type processes or blocking suppressive pathways. Extracellular adenosine generated by the ectonucleotidases CD39 and CD73 is a newly recognized \"immune checkpoint mediator\" that interferes with anti-tumor immune responses. In this review, we focus on CD39 and CD73 ectoenzymes and encompass aspects of the biochemistry of these molecules as well as detailing the distribution and function on immune cells. Effects of CD39 and CD73 inhibition in preclinical and clinical studies are discussed. Finally, we provide insights into potential clinical application of adenosinergic and other purinergic-targeting therapies and forecast how these might develop in combination with other anti-cancer modalities.",
"title": ""
},
{
"docid": "94784bc9f04dbe5b83c2a9f02e005825",
"text": "The optical code division multiple access (OCDMA), the most advanced multiple access technology in optical communication has become significant and gaining popularity because of its asynchronous access capability, faster speed, efficiency, security and unlimited bandwidth. Many codes are developed in spectral amplitude coding optical code division multiple access (SAC-OCDMA) with zero or minimum cross-correlation properties to reduce the multiple access interference (MAI) and Phase Induced Intensity Noise (PIIN). This paper compares two novel SAC-OCDMA codes in terms of their performances such as bit error rate (BER), number of active users that is accommodated with minimum cross-correlation property, high data rate that is achievable and the minimum power that the OCDMA system supports to achieve a minimum BER value. One of the proposed novel codes referred in this work as modified random diagonal code (MRDC) possesses cross-correlation between zero to one and the second novel code referred in this work as modified new zero cross-correlation code (MNZCC) possesses cross-correlation zero to further minimize the multiple access interference, which are found to be more scalable compared to the other existing SAC-OCDMA codes. In this work, the proposed MRDC and MNZCC codes are implemented in an optical system using the optisystem version-12 software for the SAC-OCDMA scheme. Simulation results depict that the OCDMA system based on the proposed novel MNZCC code exhibits better performance compared to the MRDC code and former existing SAC-OCDMA codes. The proposed MNZCC code accommodates maximum number of simultaneous users with higher data rate transmission, lower BER and longer traveling distance without any signal quality degradation as compared to the former existing SAC-OCDMA codes.",
"title": ""
},
{
"docid": "4253afeaeb2f238339611e5737ed3e06",
"text": "Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.",
"title": ""
},
{
"docid": "470810494ae81cc2361380c42116c8d7",
"text": "Sustainability is significantly important for fashion business due to consumers’ increasing awareness of environment. When a fashion company aims to promote sustainability, the main linkage is to develop a sustainable supply chain. This paper contributes to current knowledge of sustainable supply chain in the textile and clothing industry. We first depict the structure of sustainable fashion supply chain including eco-material preparation, sustainable manufacturing, green distribution, green retailing, and ethical consumers based on the extant literature. We study the case of the Swedish fast fashion company, H&M, which has constructed its sustainable supply chain in developing eco-materials, providing safety training, monitoring sustainable manufacturing, reducing carbon emission in distribution, and promoting eco-fashion. Moreover, based on the secondary data and analysis, we learn the lessons of H&M’s sustainable fashion supply chain from the country perspective: (1) the H&M’s sourcing managers may be more likely to select suppliers in the countries with lower degrees of human wellbeing; (2) the H&M’s supply chain manager may set a higher level of inventory in a country with a higher human wellbeing; and (3) the H&M CEO may consider the degrees of human wellbeing and economic wellbeing, instead of environmental wellbeing when launching the online shopping channel in a specific country.",
"title": ""
},
{
"docid": "b3cca9ebe524e4d0252289ecca8528b7",
"text": "Convolutional neural nets (CNNs) have become a practical means to perform vision tasks, particularly in the area of image classification. FPGAs are well known to be able to perform convolutions efficiently, however, most recent efforts to run CNNs on FPGAs have shown limited advantages over other devices such as GPUs. Previous approaches on FPGAs have often been memory bound due to the limited external memory bandwidth on the FPGA device. We show a novel architecture written in OpenCL, which we refer to as a Deep Learning Accelerator (DLA), that maximizes data reuse and minimizes external memory bandwidth. Furthermore, we show how we can use the Winograd transform to significantly boost the performance of the FPGA. As a result, when running our DLA on Intel’s Arria 10 device we can achieve a performance of 1020img/s, or 23img/s/W when running the AlexNet CNN benchmark. This comes to 1382 GFLOPs and is 10x faster with 8.4x more GFLOPS and 5.8x better efficiency than the state-of-the-art on FPGAs. Additionally, 23 img/s/W is competitive against the best publicly known implementation of AlexNet on nVidia’s TitanX GPU.",
"title": ""
},
{
"docid": "f3e5941be4543d5900d56c1a7d93d0ea",
"text": "These working notes summarize the different approaches we have explored in order to classify a corpus of tweets related to the 2015 Spanish General Election (COSET 2017 task from IberEval 2017). Two approaches were tested during the COSET 2017 evaluations: Neural Networks with Sentence Embeddings (based on TensorFlow) and N-gram Language Models (based on SRILM). Our results with these approaches were modest: both ranked above the “Most frequent baseline”, but below the “Bag-of-words + SVM” baseline. A third approach was tried after the COSET 2017 evaluation phase was over: Advanced Linear Models (based on fastText). Results measured over the COSET 2017 Dev and Test show that this approach is well above the “TF-IDF+RF” baseline.",
"title": ""
},
{
"docid": "1c9dd9b98b141e87ca7b74e995630456",
"text": "Transportation systems in mega-cities are often affected by various kinds of events such as natural disasters, accidents, and public gatherings. Highly dense and complicated networks in the transportation systems propagate confusion in the network because they offer various possible transfer routes to passengers. Visualization is one of the most important techniques for examining such cascades of unusual situations in the huge networks. This paper proposes visual integration of traffic analysis and social media analysis using two forms of big data: smart card data on the Tokyo Metro and social media data on Twitter. Our system provides multiple coordinated views to visually, intuitively, and simultaneously explore changes in passengers' behavior and abnormal situations extracted from smart card data and situational explanations from real voices of passengers such as complaints about services extracted from social media data. We demonstrate the possibilities and usefulness of our novel visualization environment using a series of real data case studies and domain experts' feedbacks about various kinds of events.",
"title": ""
},
{
"docid": "d8748f3c6192e0e2fe3cdb9b745ef703",
"text": "In this paper, we consider a method for computing the similarity of executable files, based on opcode graphs. We apply this technique to the challenging problem of metamorphic malware detection and compare the results to previous work based on hidden Markov models. In addition, we analyze the effect of various morphing techniques on the success of our proposed opcode graph-based detection scheme.",
"title": ""
},
{
"docid": "13fbd264cf1f515c0ad6ebb30644e32e",
"text": "This article presents a new model that accounts for working memory spans in adults, the time-based resource-sharing model. The model assumes that both components (i.e., processing and maintenance) of the main working memory tasks require attention and that memory traces decay as soon as attention is switched away. Because memory retrievals are constrained by a central bottleneck and thus totally capture attention, it was predicted that the maintenance of the items to be recalled depends on both the number of memory retrievals required by the intervening treatment and the time allowed to perform them. This number of retrievals:time ratio determines the cognitive load of the processing component. The authors show in 7 experiments that working memory spans vary as a function of this cognitive load.",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "ee4c10d53be10ed1a68e85e6a8a14f31",
"text": "1 Center for Manufacturing Research, Tennessee Technological University (TTU), Cookeville, TN 38505, USA 2 Department of Electrical and Computer Engineering, Tennessee Technological University (TTU), Cookeville, TN 38505, USA 3 Panasonic Princeton Laboratory (PPRL), Panasonic R&D Company of America, 2 Research Way, Princeton, NJ 08540, USA 4 Network Development Center, Matsushita Electric Industrial Co., Ltd., 4-12-4 Higashi-shinagawa, Shinagawa-ku, Tokyo 140-8587, Japan",
"title": ""
},
{
"docid": "2157b222a73c176ca9e54258b3a531fe",
"text": "A switched-capacitor bias that provides a constant Gm-C characteristic over process and temperature variation is presented. The bias can be adapted for use with subthreshold circuits, or circuits in strong inversion. It uses eight transistors, five switches, and three capacitors, and performs with supply voltages less than 0.9 V. Theoretical output current is derived, and stability analysis is performed. Simulated results showing an op-amp with very consistent pulse response are presented",
"title": ""
},
{
"docid": "acc26655abb2a181034db8571409d0a5",
"text": "In this paper, a new optimization approach is designed for convolutional neural network (CNN) which introduces explicit logical relations between filters in the convolutional layer. In a conventional CNN, the filters’ weights in convolutional layers are separately trained by their own residual errors, and the relations of these filters are not explored for learning. Different from the traditional learning mechanism, the proposed correlative filters (CFs) are initiated and trained jointly in accordance with predefined correlations, which are efficient to work cooperatively and finally make a more generalized optical system. The improvement in CNN performance with the proposed CF is verified on five benchmark image classification datasets, including CIFAR-10, CIFAR-100, MNIST, STL-10, and street view house number. The comparative experimental results demonstrate that the proposed approach outperforms a number of state-of-the-art CNN approaches.",
"title": ""
},
{
"docid": "5259c7d1c7b05050596f6667aa262e11",
"text": "We propose a novel approach to automatic detection and tracking of people taking different poses in cluttered and dynamic environments using a single RGB-D camera. The original RGB-D pixels are transformed to a novel point ensemble image (PEI), and we demonstrate that human detection and tracking in 3D space can be performed very effectively with this new representation. The detector in the first phase quickly locates human physiquewise plausible candidates, which are then further carefully filtered in a supervised learning and classification second phase. Joint statistics of color and height are computed for data association to generate final 3D motion trajectories of tracked individuals. Qualitative and quantitative experimental results obtained on the publicly available office dataset, mobile camera dataset and the real-world clothing store dataset we created show very promising results. © 2014 Elsevier B.V. All rights reserved. d T b r a e w c t e i c a i c p p g w e h",
"title": ""
},
{
"docid": "13bfb20823bb45feeac5fbcc9a552eaa",
"text": "Facial landmark localisation in images captured in-the-wild is an important and challenging problem. The current state-of-the-art revolves around certain kinds of Deep Convolutional Neural Networks (DCNNs) such as stacked U-Nets and Hourglass networks. In this work, we innovatively propose stacked dense U-Nets for this task. We design a novel scale aggregation network topology structure and a channel aggregation building block to improve the model’s capacity without sacrificing the computational complexity and model size. With the assistance of deformable convolutions inside the stacked dense U-Nets and coherent loss for outside data transformation, our model obtains the ability to be spatially invariant to arbitrary input face images. Extensive experiments on many in-the-wild datasets, validate the robustness of the proposed method under extreme poses, exaggerated expressions and heavy occlusions. Finally, we show that accurate 3D face alignment can assist pose-invariant face recognition where we achieve a new stateof-the-art accuracy on CFP-FP (98.514%).",
"title": ""
},
{
"docid": "b4c25df52a0a5f6ab23743d3ca9a3af2",
"text": "Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.",
"title": ""
},
{
"docid": "209842e00957d1d1786008d943895dc9",
"text": "The impact that urban green spaces have on sustainability and quality of life is phenomenal. This is also true for the local South African environment. However, in reality green spaces in urban environments are decreasing due to growing populations, increasing urbanization and development pressure. This further impacts on the provision of child-friendly spaces, a concept that is already limited in local context. Child-friendly spaces are described as environments in which people (children) feel intimately connected to, influencing the physical, social, emotional, and ecological health of individuals and communities. The benefits of providing such spaces for the youth are well documented in literature. This research therefore aimed to investigate the concept of childfriendly spaces and its applicability to the South African planning context, in order to guide the planning of such spaces for future communities and use. Child-friendly spaces in the urban environment of the city of Durban, was used as local case study, along with two international case studies namely Mullerpier public playground in Rotterdam, the Netherlands, and Kadidjiny Park in Melville, Australia. The aim was to determine how these spaces were planned and developed and to identify tools that were used to accomplish the goal of providing successful child-friendly green spaces within urban areas. The need and significance of planning for such spaces was portrayed within the international case studies. It is confirmed that minimal provision is made for green space planning within the South African context, when there is reflected on the international examples. As a result international examples and disciples of providing child-friendly green spaces should direct planning guidelines within local context. The research concluded that childfriendly green spaces have a positive impact on the urban environment and assist in a child’s development and interaction with the natural environment. Regrettably, the planning of these childfriendly spaces is not given priority within current spatial plans, despite the proven benefits of such. Keywords—Built environment, child-friendly spaces, green spaces. public places, urban area. E. J. Cilliers is a Professor at the North West University, Unit for Environmental Sciences and Management, Urban and Regional Planning, Potchestroom, 2531, South Africa (e-mail: juanee.cilliers@nwu.ac.za). Z. Goosen is a PhD student with the North West University, Unit for Environmental Sciences and Management, Urban and Regional Planning, Potchestroom, 2531, South Africa (e-mail: goosenzhangoosen@gmail.com). This research (or parts thereof) was made possible by the financial contribution of the NRF (National Research Foundation) South Africa. The opinions, findings and conclusions or recommendations expressed in this material are those of the authors and therefore the NRF does not accept any liability in regard thereto.",
"title": ""
},
{
"docid": "319285416d58c9b2da618bb6f0c8021c",
"text": "Facial expression analysis is one of the popular fields of research in human computer interaction (HCI). It has several applications in next generation user interfaces, human emotion analysis, behavior and cognitive modeling. In this paper, a facial expression classification algorithm is proposed which uses Haar classifier for face detection purpose, Local Binary Patterns(LBP) histogram of different block sizes of a face image as feature vectors and classifies various facial expressions using Principal Component Analysis (PCA). The algorithm is implemented in real time for expression classification since the computational complexity of the algorithm is small. A customizable approach is proposed for facial expression analysis, since the various expressions and intensity of expressions vary from person to person. The system uses grayscale frontal face images of a person to classify six basic emotions namely happiness, sadness, disgust, fear, surprise and anger.",
"title": ""
},
{
"docid": "f40125e7cc8279a5514deaf1146684de",
"text": "Summary Several models explain how a complex integrated system like the rodent mandible can arise from multiple developmental modules. The models propose various integrating mechanisms, including epigenetic effects of muscles on bones. We test five for their ability to predict correlations found in the individual (symmetric) and fluctuating asymmetric (FA) components of shape variation. We also use exploratory methods to discern patterns unanticipated by any model. Two models fit observed correlation matrices from both components: (1) parts originating in same mesenchymal condensation are integrated, (2) parts developmentally dependent on the same muscle form an integrated complex as do those dependent on teeth. Another fits the correlations observed in FA: each muscle insertion site is an integrated unit. However, no model fits well, and none predicts the complex structure found in the exploratory analyses, best described as a reticulated network. Furthermore, no model predicts the correlation between proximal parts of the condyloid and coronoid, which can exceed the correlations between proximal and distal parts of the same process. Additionally, no model predicts the correlation between molar alveolus and ramus and/or angular process, one of the highest correlations found in the FA component. That correlation contradicts the basic premise of all five developmental models, yet it should be anticipated from the epigenetic effects of mastication, possibly the primary morphogenetic process integrating the jaw coupling forces generated by muscle contraction with those experienced at teeth.",
"title": ""
}
] |
scidocsrr
|
4e85e23c295c2b4231d8cc5413816cff
|
Image Processing Techniques for Detection of Leaf Disease
|
[
{
"docid": "058515182c568c8df202542f28c15203",
"text": "Plant diseases have turned into a dilemma as it can cause significant reduction in both quality and quantity of agricultural products. Automatic detection of plant diseases is an essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect the symptoms of diseases as soon as they appear on plant leaves. The proposed system is a software solution for automatic detection and classification of plant leaf diseases. The developed processing scheme consists of four main steps, first a color transformation structure for the input RGB image is created, then the green pixels are masked and removed using specific threshold value followed by segmentation process, the texture statistics are computed for the useful segments, finally the extracted features are passed through the classifier. The proposed algorithm’s efficiency can successfully detect and classify the examined diseases with an accuracy of 94%. Experimental results on a database of about 500 plant leaves confirm the robustness of the proposed approach.",
"title": ""
},
{
"docid": "9aa3a9b8fb22ba929146298386ca9e57",
"text": "Since current grading of plant diseases is mainly based on eyeballing, a new method is developed based on computer image processing. All influencing factors existed in the process of image segmentation was analyzed and leaf region was segmented by using Otsu method. In the HSI color system, H component was chosen to segment disease spot to reduce the disturbance of illumination changes and the vein. Then, disease spot regions were segmented by using Sobel operator to examine disease spot edges. Finally, plant diseases are graded by calculating the quotient of disease spot and leaf areas. Researches indicate that this method to grade plant leaf spot diseases is fast and accurate.",
"title": ""
},
{
"docid": "1b60ded506c85edd798fe0759cce57fa",
"text": "The studies of plant trait/disease refer to the studies of visually observable patterns of a particular plant. Nowadays crops face many traits/diseases. Damage of the insect is one of the major trait/disease. Insecticides are not always proved efficient because insecticides may be toxic to some kind of birds. It also damages natural animal food chains. A common practice for plant scientists is to estimate the damage of plant (leaf, stem) because of disease by an eye on a scale based on percentage of affected area. It results in subjectivity and low throughput. This paper provides a advances in various methods used to study plant diseases/traits using image processing. The methods studied are for increasing throughput & reducing subjectiveness arising from human experts in detecting the plant diseases.",
"title": ""
}
] |
[
{
"docid": "a9ea1f1f94a26181addac948837c3030",
"text": "Crime tends to clust er geographi cally. This has led to the wide usage of hotspot analysis to identify and visualize crime. Accurately identified crime hotspots can greatly benefit the public by creating accurate threat visualizations, more efficiently allocating police resources, and predicting crime. Yet existing mapping methods usually identify hotspots without considering the underlying correlates of crime. In this study, we introduce a spatial data mining framework to study crime hotspots through their related variables. We use Geospatial Discriminative Patterns (GDPatterns) to capture the significant difference between two classes (hotspots and normal areas) in a geo-spatial dataset. Utilizing GDPatterns, we develop a novel model—Hotspot Optimization Tool (HOT)—to improve the identification of crime hotspots. Finally, based on a similarity measure, we group GDPattern clusters and visualize the distribution and characteristics of crime related variables. We evaluate our approach using a real world dataset collected from a northeast city in the United States. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8f91beade67a248cc0c063db42caabec",
"text": "c:nt~ now, true videwon-dernaad can ody be atievsd hg a dedicated data flow for web service request. This brute force approach is probibitivdy &\\Tensive. Using mtiticast w si@cantly reduce the system rest. This solution, however, mu~t dday services in order to serve many requ~s as a hztch. h this paper, we consider a third alternative ded Pat&ing. h our technique, an e*mg mtiticast m expand dynarnidy to serve new &ents. ~otig new &ents to join an existiig rutiticast improves the ficiency of the rntiti-.. ~hermor~ since W requ~s can be served immediatdy, the &ents experience no service dday md true vide+on-dem~d ~ be achieve~ A si~cant contribution of tkis work, is making mdtiwork for true vide~ on-demand ssrvicw. h fact, we are able to tiate the service latency and improve the efficiency of mtiticast at the same time To assms the ben~t of this sdetne, w perform simdations to compare its performance +th that of standard rntiti-. Our simtiation rats indicate convincingly that Patching offers .wbstanti~y better perforrnace.",
"title": ""
},
{
"docid": "add2f0b6aeb19e01ec4673b6f391cc61",
"text": "Accurate localization of landmarks in the vicinity of a robot is a first step towards solving the SLAM problem. In this work, we propose algorithms to accurately estimate the 3D location of the landmarks from the robot only from a single image taken from its on board camera. Our approach differs from previous efforts in this domain in that it first reconstructs accurately the 3D environment from a single image, then it defines a coordinate system over the environment, and later it performs the desired localization with respect to this coordinate system using the environment's features. The ground plane from the given image is accurately estimated and this precedes segmentation of the image into ground and vertical regions. A Markov Random Field (MRF) based 3D reconstruction is performed to build an approximate depth map of the given image. This map is robust against texture variations due to shadows, terrain differences, etc. A texture segmentation algorithm is also applied to determine the ground plane accurately. Once the ground plane is estimated, we use the respective camera's intrinsic and extrinsic calibration information to calculate accurate 3D information about the features in the scene.",
"title": ""
},
{
"docid": "8a42bc2dec684cf087d19bbbd2e815f8",
"text": "Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us fullcircle to Goffman—blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram—an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.",
"title": ""
},
{
"docid": "ce9084c2ac96db6bca6ddebe925c3d42",
"text": "Tactical driving decision making is crucial for autonomous driving systems and has attracted considerable interest in recent years. In this paper, we propose several practical components that can speed up deep reinforcement learning algorithms towards tactical decision making tasks: 1) nonuniform action skipping as a more stable alternative to action-repetition frame skipping, 2) a counterbased penalty for lanes on which ego vehicle has less right-of-road, and 3) heuristic inference-time action masking for apparently undesirable actions. We evaluate the proposed components in a realistic driving simulator and compare them with several baselines. Results show that the proposed scheme provides superior performance in terms of safety, efficiency, and comfort.",
"title": ""
},
{
"docid": "4f0d34e830387947f807213599d47652",
"text": "An essential feature of large scale free graphs, such as the Web, protein-to-protein interaction, brain connectivity, and social media graphs, is that they tend to form recursive communities. The latter are densely connected vertex clusters exhibiting quick local information dissemination and processing. Under the fuzzy graph model vertices are fixed while each edge exists with a given probability according to a membership function. This paper presents Fuzzy Walktrap and Fuzzy Newman-Girvan, fuzzy versions of two established community discovery algorithms. The proposed algorithms have been applied to a synthetic graph generated by the Kronecker model with different termination criteria and the results are discussed. Keywords-Fuzzy graphs; Membership function; Community detection; Termination criteria; Walktrap algorithm; NewmanGirvan algorithm; Edge density; Kronecker model; Large graph analytics; Higher order data",
"title": ""
},
{
"docid": "9a2a126eecb116f04b501028f92b7736",
"text": "Sleep bruxism (SB) is a common sleep-related motor disorder characterized by tooth grinding and clenching. SB diagnosis is made on history of tooth grinding and confirmed by polysomnographic recording of electromyographic (EMG) episodes in the masseter and temporalis muscles. The typical EMG activity pattern in patients with SB is known as rhythmic masticatory muscle activity (RMMA). The authors observed that most RMMA episodes occur in association with sleep arousal and are preceded by physiologic activation of the central nervous and sympathetic cardiac systems. This article provides a comprehensive review of the cause, pathophysiology, assessment, and management of SB.",
"title": ""
},
{
"docid": "c5bbb45cc61de12d0eac19d1e59752fb",
"text": "'No-shows' or missed appointments result in under-utilized clinic capacity. We develop a logistic regression model using electronic medical records to estimate patients' no-show probabilities and illustrate the use of the estimates in creating clinic schedules that maximize clinic capacity utilization while maintaining small patient waiting times and clinic overtime costs. This study used information on scheduled outpatient appointments collected over a three-year period at a Veterans Affairs medical center. The call-in process for 400 clinic days was simulated and for each day two schedules were created: the traditional method that assigned one patient per appointment slot, and the proposed method that scheduled patients according to their no-show probability to balance patient waiting, overtime and revenue. Combining patient no-show models with advanced scheduling methods would allow more patients to be seen a day while improving clinic efficiency. Clinics should consider the benefits of implementing scheduling software that includes these methods relative to the cost of no-shows.",
"title": ""
},
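The scheduling idea in the abstract above can be sketched with an off-the-shelf logistic regression. The features (lead time, prior no-show rate, age), the synthetic data, and the overbooking threshold below are illustrative assumptions, not the study's actual variables or policy.

```python
# Minimal sketch of a no-show prediction model in the spirit of the abstract above.
# Features, synthetic labels, and the overbooking threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
lead_time = rng.integers(1, 60, n)        # days between booking and appointment
prior_rate = rng.uniform(0.0, 0.5, n)     # patient's historical no-show fraction
age = rng.integers(18, 90, n)

# Synthetic labels: longer lead times and higher prior rates raise no-show odds.
logit = -2.0 + 0.03 * lead_time + 3.0 * prior_rate - 0.01 * (age - 50)
no_show = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([lead_time, prior_rate, age])
model = LogisticRegression(max_iter=1000).fit(X, no_show)

# Slots whose predicted no-show probability is high are candidates for overbooking.
p = model.predict_proba(X)[:, 1]
print(f"slots flagged for overbooking: {(p > 0.4).mean():.1%}")
```

In a full system these probabilities would feed a simulation of patient waiting time and clinic overtime, as the study does, rather than a fixed threshold.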
{
"docid": "3ad124875f073ff961aaf61af2832815",
"text": "EVERY HUMAN CULTURE HAS SOME FORM OF MUSIC WITH A BEAT\na perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This \"action simulation for auditory prediction\" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.",
"title": ""
},
{
"docid": "37feedcb9e527601cb28fe59b2526ab3",
"text": "In this paper we present a covariance based tracking algorithm for intelligent video analysis to assist marine biologists in understanding the complex marine ecosystem in the Ken-Ding sub-tropical coral reef in Taiwan by processing underwater real-time videos recorded in open ocean. One of the most important aspects of marine biology research is the investigation of fish trajectories to identify events of interest such as fish preying, mating, schooling, etc. This task, of course, requires a reliable tracking algorithm able to deal with 1) the difficulties of following fish that have multiple degrees of freedom and 2) the possible varying conditions of the underwater environment. To accommodate these needs, we have developed a tracking algorithm that exploits covariance representation to describe the object’s appearance and statistical information and also to join different types of features such as location, color intensities, derivatives, etc. The accuracy of the algorithm was evaluated by using hand-labeled ground truth data on 30000 frames belonging to ten different videos, achieving an average performance of about 94%, estimated using multiple ratios that provide indication on how good is a tracking algorithm both globally (e.g. counting objects in a fixed range of time) and locally (e.g. in distinguish occlusions among objects).",
"title": ""
},
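The covariance representation the tracker relies on is compact enough to sketch directly. The sketch below assumes a grayscale patch and a five-dimensional feature vector (x, y, intensity, |Ix|, |Iy|), which is one common choice rather than the paper's exact feature set.

```python
# Sketch of a region covariance descriptor and a dissimilarity between two regions,
# as used by covariance-based trackers. Assumes a grayscale patch; the feature set
# (x, y, I, |Ix|, |Iy|) is a common choice, not necessarily the paper's exact one.
import numpy as np

def region_covariance(patch: np.ndarray) -> np.ndarray:
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    iy, ix = np.gradient(patch.astype(float))          # derivatives along y and x
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(ix).ravel(), np.abs(iy).ravel()], axis=1)
    return np.cov(feats, rowvar=False)

def covariance_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    # Foerstner-style metric: root of summed squared logs of generalized eigenvalues.
    eigvals = np.linalg.eigvals(np.linalg.solve(c1, c2)).real
    eigvals = np.clip(eigvals, 1e-12, None)
    return float(np.sqrt(np.sum(np.log(eigvals) ** 2)))

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = a + 0.05 * rng.random((32, 32))                     # slightly perturbed patch
print(covariance_distance(region_covariance(a), region_covariance(b)))
```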
{
"docid": "a6e6cf1473adb05f33b55cb57d6ed6d3",
"text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.",
"title": ""
},
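The averaging step at the core of the proposed augmentation can be sketched as follows: series are aligned to a reference with DTW, and each reference sample is replaced by the weighted mean of the samples aligned to it. This is a single, simplified DBA-style iteration for univariate series, not the authors' full weighted-DBA procedure.

```python
# Sketch of one weighted, DBA-style averaging step under DTW alignment.
# A full DBA would iterate this step; the weights bias the synthetic example
# toward particular series, which is how many different examples are generated.
import numpy as np

def dtw_path(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def weighted_dba_step(reference, series_list, weights):
    sums = np.zeros_like(reference, dtype=float)
    norm = np.zeros(len(reference))
    for s, w in zip(series_list, weights):
        for i, j in dtw_path(reference, s):
            sums[i] += w * s[j]
            norm[i] += w
    return sums / np.maximum(norm, 1e-12)

rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 2 * np.pi, 50))
variants = [base + 0.1 * rng.standard_normal(50) for _ in range(5)]
synthetic = weighted_dba_step(variants[0], variants, weights=[3, 1, 1, 1, 1])
print(synthetic[:5])
```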
{
"docid": "8cdbbbfa00dfd08119e1802e9498df20",
"text": "Background:Cetuximab is the only targeted agent approved for the treatment of head and neck squamous cell carcinomas (HNSCC), but low response rates and disease progression are frequently reported. As the phosphoinositide 3-kinase (PI3K) and the mammalian target of rapamycin (mTOR) pathways have an important role in the pathogenesis of HNSCC, we investigated their involvement in cetuximab resistance.Methods:Different human squamous cancer cell lines sensitive or resistant to cetuximab were tested for the dual PI3K/mTOR inhibitor PF-05212384 (PKI-587), alone and in combination, both in vitro and in vivo.Results:Treatment with PKI-587 enhances sensitivity to cetuximab in vitro, even in the condition of epidermal growth factor receptor (EGFR) resistance. The combination of the two drugs inhibits cells survival, impairs the activation of signalling pathways and induces apoptosis. Interestingly, although significant inhibition of proliferation is observed in all cell lines treated with PKI-587 in combination with cetuximab, activation of apoptosis is evident in sensitive but not in resistant cell lines, in which autophagy is pre-eminent. In nude mice xenografted with resistant Kyse30 cells, the combined treatment significantly reduces tumour growth and prolongs mice survival.Conclusions:Phosphoinositide 3-kinase/mammalian target of rapamycin inhibition has an important role in the rescue of cetuximab resistance. Different mechanisms of cell death are induced by combined treatment depending on basal anti-EGFR responsiveness.",
"title": ""
},
{
"docid": "eb23e4dedc5444faff49fa46b9866a15",
"text": "People with severe neurological impairments face many challenges in sensorimotor functions and communication with the environment; therefore they have increased demand for advanced, adaptive and personalized rehabilitation. During the last several decades, numerous studies have developed brain-computer interfaces (BCIs) with the goals ranging from providing means of communication to functional rehabilitation. Here we review the research on non-invasive, electroencephalography (EEG)-based BCI systems for communication and rehabilitation. We focus on the approaches intended to help severely paralyzed and locked-in patients regain communication using three different BCI modalities: slow cortical potentials, sensorimotor rhythms and P300 potentials, as operational mechanisms. We also review BCI systems for restoration of motor function in patients with spinal cord injury and chronic stroke. We discuss the advantages and limitations of these approaches and the challenges that need to be addressed in the future.",
"title": ""
},
{
"docid": "97c81cfa85ff61b999ae8e565297a16e",
"text": "This paper describes the complete implementation of a blind image denoising algorithm, that takes any digital image as input. In a first step the algorithm estimates a Signal and Frequency Dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and on scans of old photographs. Source Code The source code (ANSI C), its documentation, and the online demo are accessible at the IPOL web page of this article1.",
"title": ""
},
{
"docid": "44e28ba2149dce27fd0ccc9ed2065feb",
"text": "Flip chip assembly technology is an attractive solution for high I/O density and fine-pitch microelectronics packaging. Recently, high efficient GaN-based light-emitting diodes (LEDs) have undergone a rapid development and flip chip bonding has been widely applied to fabricate high-brightness GaN micro-LED arrays [1]. The flip chip GaN LED has some advantages over the traditional top-emission LED, including improved current spreading, higher light extraction efficiency, better thermal dissipation capability and the potential of further optical component integration [2, 3]. With the advantages of flip chip assembly, micro-LED (μLED) arrays with high I/O density can be performed with improved luminous efficiency than conventional p-side-up micro-LED arrays and are suitable for many potential applications, such as micro-displays, bio-photonics and visible light communications (VLC), etc. In particular, μLED array based selif-emissive micro-display has the promising to achieve high brightness and contrast, reliability, long-life and compactness, which conventional micro-displays like LCD, OLED, etc, cannot compete with. In this study, GaN micro-LED array device with flip chip assembly package process was presented. The bonding quality of flip chip high density micro-LED array is tested by daisy chain test. The p-n junction tests of the devices are measured for electrical characteristics. The illumination condition of each micro-diode pixel was examined under a forward bias. Failure mode analysis was performed using cross sectioning and scanning electron microscopy (SEM). Finally, the fully packaged micro-LED array device is demonstrated as a prototype of dice projector system.",
"title": ""
},
{
"docid": "247534c6b5416e4330a84e10daf2bc0c",
"text": "The aim of the present study was to determine metabolic responses, movement patterns and distance covered at running speeds corresponding to fixed blood lactate concentrations (FBLs) in young soccer players during a match play. A further aim of the study was to evaluate the relationships between FBLs, maximal oxygen consumption (VO2max) and distance covered during a game. A multistage field test was administered to 32 players to determine FBLs and VO2max. Blood lactate (LA), heart rate (HR) and rate of perceived exertion (RPE) responses were obtained from 36 players during tournament matches filmed using six fixed cameras. Images were transferred to a computer, for calibration and synchronization. In all players, values for LA and HR were higher and RPE lower during the 1(st) half compared to the 2(nd) half of the matches (p < 0.01). Players in forward positions had higher LA levels than defenders, but HR and RPE values were similar between playing positions. Total distance and distance covered in jogging, low-moderate-high intensity running and low intensity sprint were higher during the 1(st) half (p < 0.01). In the 1(st) half, players also ran longer distances at FBLs [p<0.01; average running speed at 2mmol·L(-1) (FBL2): 3.32 ± 0.31m·s(-1) and average running speed at 4mmol·L(-1) (FBL4): 3.91 ± 0.25m·s(-1)]. There was a significant difference between playing positions in distance covered at different running speeds (p < 0.05). However, when distance covered was expressed as FBLs, the players ran similar distances. In addition, relationships between FBLs and total distance covered were significant (r = 0.482 to 0.570; p < 0.01). In conclusion, these findings demonstrated that young soccer players experienced higher internal load during the 1(st) half of a game compared to the 2(nd) half. Furthermore, although movement patterns of players differed between playing positions, all players experienced a similar physiological stress throughout the game. Finally, total distance covered was associated to fixed blood lactate concentrations during play. Key pointsBased on LA, HR and RPE responses, young top soccer players experienced a higher physiological stress during the 1(st) half of the matches compared to the 2(nd) half.Movement patterns differed in accordance with the players' positions but that all players experienced a similar physiological stress during match play.Approximately one quarter of total distance was covered at speeds that exceeded the 4 mmol·L(-1) fixed LA threshold.Total distance covered was influenced by running speeds at fixed lactate concentrations in young soccer players during match play.",
"title": ""
},
{
"docid": "7177503e5a6dffcaab46009673af5eed",
"text": "This paper describes a heart attack self-test application for a mobile phone that allows potential victims, without the intervention of a medical specialist, to quickly assess whether they are having a heart attack. Heart attacks can occur anytime and anywhere. Using pervasive technology such as a mobile phone and a small wearable ECG sensor it is possible to collect the user's symptoms and to detect the onset of a heart attack by analysing the ECG recordings. If the application assesses that the user is at risk, it will urge the user to call the emergency services immediately. If the user has a cardiac arrest the application will automatically determine the current location of the user and alert the ambulance services and others to the person's location.",
"title": ""
},
{
"docid": "5dc78e62ca88a6a5f253417093e2aa4d",
"text": "This paper surveys the scientific and trade literature on cybersecurity for unmanned aerial vehicles (UAV), concentrating on actual and simulated attacks, and the implications for small UAVs. The review is motivated by the increasing use of small UAVs for inspecting critical infrastructures such as the electric utility transmission and distribution grid, which could be a target for terrorism. The paper presents a modified taxonomy to organize cyber attacks on UAVs and exploiting threats by Attack Vector and Target. It shows that, by Attack Vector, there has been one physical attack and ten remote attacks. By Target, there have been six attacks on GPS (two jamming, four spoofing), two attacks on the control communications stream (a deauthentication attack and a zero-day vulnerabilities attack), and two attacks on data communications stream (two intercepting the data feed, zero executing a video replay attack). The paper also divides and discusses the findings by large or small UAVs, over or under 25 kg, but concentrates on small UAVs. The survey concludes that UAV-related research to counter cybersecurity threats focuses on GPS Jamming and Spoofing, but ignores attacks on the controls and data communications stream. The gap in research on attacks on the data communications stream is concerning, as an operator can see a UAV flying off course due to a control stream attack but has no way of detecting a video replay attack (substitution of a video feed).",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
},
{
"docid": "0b86a006b1f8e3a5e940daef25fe7d58",
"text": "While drug toxicity (especially hepatotoxicity) is the most frequent reason cited for withdrawal of an approved drug, no simple solution exists to adequately predict such adverse events. Simple cytotoxicity assays in HepG2 cells are relatively insensitive to human hepatotoxic drugs in a retrospective analysis of marketed pharmaceuticals. In comparison, a panel of pre-lethal mechanistic cellular assays hold the promise to deliver a more sensitive approach to detect endpoint-specific drug toxicities. The panel of assays covered by this review includes steatosis, cholestasis, phospholipidosis, reactive intermediates, mitochondria membrane function, oxidative stress, and drug interactions. In addition, the use of metabolically competent cells or the introduction of major human hepatocytes in these in vitro studies allow a more complete picture of potential drug side effect. Since inter-individual therapeutic index (TI) may differ from patient to patient, the rational use of one or more of these cellular assay and targeted in vivo exposure data may allow pharmaceutical scientists to select drug candidates with a higher TI potential in the drug discovery phase.",
"title": ""
}
] |
scidocsrr
|
696b6ce7b4804a4acbcc660af91cf0ac
|
Automatic Headline Generation for Newspaper Stories
|
[
{
"docid": "f54792cef073ad6f8828a247317ced2f",
"text": "We describe a parsing system based upon a language model for English that is, in turn, based upon assigning probabilities to possible parses for a sentence. This model is used in a parsing system by nding the parse for the sentence with the highest probability. This system outperforms previous schemes. As this is the third in a series of parsers by di erent authors that are similar enough to invite detailed comparisons but di erent enough to give rise to di erent levels of performance, we also report on some experiments designed to identify what aspects of these systems best explain their relative performance.",
"title": ""
},
{
"docid": "6643797b32fa04bc652940188c3c6e0c",
"text": "In neural text generation such as neural machine translation, summarization, and image captioning, beam search is widely used to improve the output text quality. However, in the neural generation setting, hypotheses can finish in different steps, which makes it difficult to decide when to end beam search to ensure optimality. We propose a provably optimal beam search algorithm that will always return the optimal-score complete hypothesis (modulo beam size), and finish as soon as the optimality is established (finishing no later than the baseline). To counter neural generation’s tendency for shorter hypotheses, we also introduce a bounded length reward mechanism which allows a modified version of our beam search algorithm to remain optimal. Experiments on neural machine translation demonstrate that our principled beam search algorithm leads to improvement in BLEU score over previously proposed alternatives.",
"title": ""
}
] |
[
{
"docid": "71ab7077d997910a61e98a2ea53198ba",
"text": "We describe the system developed for the CoNLL-2013 shared task—automatic English L2 grammar error correction. The system is based on the rule-based approach. It uses very few additional resources: a morphological analyzer and a list of 250 common uncountable nouns, along with the training data provided by the organizers. The system uses the syntactic information available in the training data: this information is represented as syntactic n-grams, i.e. n-grams extracted by following the paths in dependency trees. The system is simple and was developed in a short period of time (1 month). Since it does not employ any additional resources or any sophisticated machine learning methods, it does not achieve high scores (specifically, it has low recall) but could be considered as a baseline system for the task. On the other hand, it shows what can be obtained using a simple rule-based approach and presents a few situations where the rule-based approach can perform better than ML ap-",
"title": ""
},
{
"docid": "4ab6403e073b58d55beecbe7aada02be",
"text": "In this paper we present an approach to creating educational programming computer games. We combine the CSI interpreter with Unity 3D to enable a player to enter program statements that can alter the game world while it is being played. We also create wrapper methods that encapsulate complex Unity C# tasks into easy to use helper functions. The results from our initial experimentation show that the technique offers a promising approach to educational programming games.",
"title": ""
},
{
"docid": "2359295e109766126c5427b71031f4a0",
"text": "Recently, aggressive voltage scaling was shown as an important technique in achieving highly energy-efficient circuits. Specifically, scaling Vdd to near or sub-threshold regions was proposed for energy-constrained sensor systems to enable long lifetime and small system volume [1][2][4]. However, energy efficiency degrades below a certain voltage, Vmin, due to rapidly increasing leakage energy consumption, setting a fundamental limit on the achievable energy efficiency. In addition, voltage scaling degrades performance and heightens delay variability due to large Id sensitivity to PVT variations in the ultra-low voltage (ULV) regime. This paper uses circuit and architectural methods to further reduce the minimum energy point, or Emin, and establish a new lower limit on energy efficiency, while simultaneously improving performance and robustness. The approaches are demonstrated on an FFT core in 65nm CMOS.",
"title": ""
},
{
"docid": "cb59a7493f6b9deee4691e6f97c93a1f",
"text": "AIMS AND OBJECTIVES\nThis integrative review of the literature addresses undergraduate nursing students' attitudes towards and use of research and evidence-based practice, and factors influencing this. Current use of research and evidence within practice, and the influences and perceptions of students in using these tools in the clinical setting are explored.\n\n\nBACKGROUND\nEvidence-based practice is an increasingly critical aspect of quality health care delivery, with nurses requiring skills in sourcing relevant information to guide the care they provide. Yet, barriers to engaging in evidence-based practice remain. To increase nurses' use of evidence-based practice within healthcare settings, the concepts and skills required must be introduced early in their career. To date, however, there is little evidence to show if and how this inclusion makes a difference.\n\n\nDESIGN\nIntegrative literature review.\n\n\nMETHODS\nProQuest, Summon, Science Direct, Ovid, CIAP, Google scholar and SAGE databases were searched, and Snowball search strategies used. One hundred and eighty-one articles were reviewed. Articles were then discarded for irrelevance. Nine articles discussed student attitudes and utilisation of research and evidence-based practice.\n\n\nRESULTS\nFactors surrounding the attitudes and use of research and evidence-based practice were identified, and included the students' capability beliefs, the students' attitudes, and the attitudes and support capabilities of wards/preceptors.\n\n\nCONCLUSIONS\nUndergraduate nursing students are generally positive toward using research for evidence-based practice, but experience a lack of support and opportunity. These students face cultural and attitudinal disadvantage, and lack confidence to practice independently. Further research and collaboration between educational facilities and clinical settings may improve utilisation.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThis paper adds further discussion to the topic from the perspective of and including influences surrounding undergraduate students and new graduate nurses.",
"title": ""
},
{
"docid": "59084b05271efe4b22dd490958622c1e",
"text": "Millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) seamlessly integrates two wireless technologies, mmWave communications and massive MIMO, which provides spectrums with tens of GHz of total bandwidth and supports aggressive space division multiple access using large-scale arrays. Though it is a promising solution for next-generation systems, the realization of mmWave massive MIMO faces several practical challenges. In particular, implementing massive MIMO in the digital domain requires hundreds to thousands of radio frequency chains and analog-to-digital converters matching the number of antennas. Furthermore, designing these components to operate at the mmWave frequencies is challenging and costly. These motivated the recent development of the hybrid-beamforming architecture, where MIMO signal processing is divided for separate implementation in the analog and digital domains, called the analog and digital beamforming, respectively. Analog beamforming using a phase array introduces uni-modulus constraints on the beamforming coefficients. They render the conventional MIMO techniques unsuitable and call for new designs. In this paper, we present a systematic design framework for hybrid beamforming for multi-cell multiuser massive MIMO systems over mmWave channels characterized by sparse propagation paths. The framework relies on the decomposition of analog beamforming vectors and path observation vectors into Kronecker products of factors being uni-modulus vectors. Exploiting properties of Kronecker mixed products, different factors of the analog beamformer are designed for either nulling interference paths or coherently combining data paths. Furthermore, a channel estimation scheme is designed for enabling the proposed hybrid beamforming. The scheme estimates the angles-of-arrival (AoA) of data and interference paths by analog beam scanning and data-path gains by analog beam steering. The performance of the channel estimation scheme is analyzed. In particular, the AoA spectrum resulting from beam scanning, which displays the magnitude distribution of paths over the AoA range, is derived in closed form. It is shown that the inter-cell interference level diminishes inversely with the array size, the square root of pilot sequence length, and the spatial separation between paths, suggesting different ways of tackling pilot contamination.",
"title": ""
},
{
"docid": "df56d2914cdfbc31dff9ecd9a3093379",
"text": "In this paper, square slot (SS) upheld by the substrate integrated waveguide (SIW) cavity is presented. A simple 50 Ω microstrip line is employed to feed this cavity. Then slot matched cavity modes are coupled to the slot and radiated efficiently. The proposed antenna features the following structural advantages, compact size, light weight and easy low cost fabrication. Concerning the electrical performance, it exhibits 15% impedance bandwidth for the reflection coefficient less than -10 dB and the realized gain touches 8.5 dB frontier.",
"title": ""
},
{
"docid": "7e1608bfd1f0256d0873de4f54ce6bfb",
"text": "A fully integrated system for the automatic detection and characterization of cracks in road flexible pavement surfaces, which does not require manually labeled samples, is proposed to minimize the human subjectivity resulting from traditional visual surveys. The first task addressed, i.e., crack detection, is based on a learning from samples paradigm, where a subset of the available image database is automatically selected and used for unsupervised training of the system. The system classifies nonoverlapping image blocks as either containing crack pixels or not. The second task deals with crack type characterization, for which another classification system is constructed, to characterize the detected cracks' connect components. Cracks are labeled according to the types defined in the Portuguese Distress Catalog, with each different crack present in a given image receiving the appropriate label. Moreover, a novel methodology for the assignment of crack severity levels is introduced, computing an estimate for the width of each detected crack. Experimental crack detection and characterization results are presented based on images captured during a visual road pavement surface survey over Portuguese roads, with promising results. This is shown by the quantitative evaluation methodology introduced for the evaluation of this type of system, including a comparison with human experts' manual labeling results.",
"title": ""
},
{
"docid": "41a3a4174a0fade6fb96ade0294c3eda",
"text": "Recent development in fully convolutional neural network enables efficient end-to-end learning of semantic segmentation. Traditionally, the convolutional classifiers are taught to learn the representative semantic features of labeled semantic objects. In this work, we propose a reverse attention network (RAN) architecture that trains the network to capture the opposite concept (i.e., what are not associated with a target class) as well. The RAN is a three-branch network that performs the direct, reverse and reverse-attention learning processes simultaneously. Extensive experiments are conducted to show the effectiveness of the RAN in semantic segmentation. Being built upon the DeepLabv2-LargeFOV, the RAN achieves the state-of-the-art mean IoU score (48.1%) for the challenging PASCAL-Context dataset. Significant performance improvements are also observed for the PASCAL-VOC, Person-Part, NYUDv2 and ADE20K datasets.",
"title": ""
},
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
},
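The matching-cost idea can be illustrated with a small Siamese-style network that scores a pair of grayscale patches. The 9x9 patch size and layer widths below are illustrative, not the paper's tuned fast or accurate architectures.

```python
# Minimal PyTorch sketch: a small convolutional network scores the similarity of
# two grayscale patches, trained with binary matching / non-matching labels.
# Architecture sizes are illustrative assumptions, not the paper's exact networks.
import torch
import torch.nn as nn

class PatchSimilarity(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 64, 3), nn.ReLU(),
        )
        # Two 9x9 patches -> two 64x3x3 feature maps, concatenated and scored.
        self.head = nn.Sequential(
            nn.Linear(2 * 64 * 3 * 3, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, left, right):
        fl = self.features(left).flatten(1)
        fr = self.features(right).flatten(1)
        return self.head(torch.cat([fl, fr], dim=1)).squeeze(1)

model = PatchSimilarity()
left = torch.randn(8, 1, 9, 9)     # batch of left-image patches
right = torch.randn(8, 1, 9, 9)    # candidate right-image patches
labels = torch.randint(0, 2, (8,)).float()
loss = nn.BCEWithLogitsLoss()(model(left, right), labels)
loss.backward()
print(float(loss))
```

At test time the network output would initialize the per-pixel, per-disparity matching cost that the post-processing steps then refine.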
{
"docid": "a88809760ba85afd558d4dd076a4dec8",
"text": "Traditional web search engines treat queries as sequences of keywords and return web pages that contain those keywords as results. Such a mechanism is effective when the user knows exactly the right words that web pages use to describe the content they are looking for. However, it is less than satisfactory or even downright hopeless if the user asks for a concept or topic that has broader and sometimes ambiguous meanings. This is because keyword-based search engines index web pages by keywords and not by concepts or topics. In fact they do not understand the content of the web pages. In this paper, we present a framework that improves web search experiences through the use of a probabilistic knowledge base. The framework classifies web queries into different patterns according to the concepts and entities in addition to keywords contained in these queries. Then it produces answers by interpreting the queries with the help of the knowledge base. Our preliminary results showed that the new framework is capable of answering various types of topic-like queries with much higher user satisfaction, and is therefore a valuable addition to the traditional web search.",
"title": ""
},
{
"docid": "bde1d85da7f1ac9c9c30b0fed448aac6",
"text": "We survey temporal description logics that are based on standard temporal logics such as LTL and CTL. In particular, we concentrate on the computational complexity of the satisfiability problem and algorithms for deciding it.",
"title": ""
},
{
"docid": "60b3460f1ae554c6d24b9b982484d0c1",
"text": "Archaeological remote sensing is not a novel discipline. Indeed, there is already a suite of geoscientific techniques that are regularly used by practitioners in the field, according to standards and best practice guidelines. However, (i) the technological development of sensors for data capture; (ii) the accessibility of new remote sensing and Earth Observation data; and (iii) the awareness that a combination of different techniques can lead to retrieval of diverse and complementary information to characterize landscapes and objects of archaeological value and significance, are currently three triggers stimulating advances in methodologies for data acquisition, signal processing, and the integration and fusion of extracted information. The Special Issue “Remote Sensing and Geosciences for Archaeology” therefore presents a collection of scientific contributions that provides a sample of the state-of-the-art and forefront research in this field. Site discovery, understanding of cultural landscapes, augmented knowledge of heritage, condition assessment, and conservation are the main research and practice targets that the papers published in this Special Issue aim to address.",
"title": ""
},
{
"docid": "7ea89697894cb9e0da5bfcebf63be678",
"text": "This paper develops a frequency-domain iterative machine learning (IML) approach for output tracking. Frequency-domain iterative learning control allows bounded noncausal inversion of system dynamics and is, therefore, applicable to nonminimum phase systems. The model used in the frequency-domain control update can be obtained from the input–output data acquired during the iteration process. However, such data-based approaches can have challenges if the noise-to-output-signal ratio is large. The main contribution of this paper is the use of kernel-based machine learning during the iterations to estimate both the model (and its inverse) for the control update, as well as the model uncertainty needed to establish bounds on the iteration gain for ensuring convergence. Another contribution is the proposed use of augmented inputs with persistency of excitation to promote learning of the model during iterations. The improved model can be used to better infer the inverse input resulting in lower initial error for new output trajectories. The proposed IML approach with the augmented input is illustrated with simulations for a benchmark nonminimum phase example.",
"title": ""
},
{
"docid": "308e828228f6ed186b990dd2611dd67a",
"text": "Smartwatch has become one of the most popular wearable computers on the market. We conduct an IRB-approved measurement study involving 27 Android smartwatch users. Using a 106-day dataset collected from our participants, we perform in-depth characterization of three key aspects of smartwatch usage \"in the wild\": usage patterns, energy consumption, and network traffic. Based on our findings, we identify key aspects of the smartwatch ecosystem that can be further improved, propose recommendations, and point out future research directions.",
"title": ""
},
{
"docid": "cbad7caa1cc1362e8cd26034617c39f4",
"text": "Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far. Each protocol addressed a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol will likely not be a best fit. This yields robustness and performance issues which may be even worse in services that exhibit fluctuating conditions and workloads. In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming at covering a larger set of conditions and use-cases, probably the union of individual subsets of these protocols. At anytime, a launched protocol in ADAPT can be aborted and replaced by another protocol according to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an \"evaluation process\" that takes into consideration both: protocol characteristics and its performance. This is achieved by applying some mathematical formulas that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time using Machine Learning prediction mechanisms to get accurate evaluations. We compare ADAPT with well known BFT protocols showing that it outperforms others as system conditions change and under dynamic workloads.",
"title": ""
},
{
"docid": "49a645d8d1c160a445a15a2dfd142a7f",
"text": "Currently, 4G network becomes commercial in large scale around the world and the industry has started the fifth-generation mobile communication technology (5G) research. Compared to 4G network, 5G network will support larger mobility as well as higher transmission rate, higher user experience rate, energy efficiency, spectrum efficiency and so forth. All of these will boost a variety of multimedia services, especially for Over-The-Top (OTT) services. So for, OTT services have already gained great popularity and contributed to large traffic consumption, which propose a challenge for operators. As OTT services are designed to deliver over the best effort Internet, the QoE management solutions for traditional multimedia services are obsolete, which propose new challenges in QOE management aspects for network and service providers, especially for the 4G and future 5G network. This paper attempts to present the technical challenges faced by 5G network from QoE management perspective of OTT services. Our objective is to enhance the user experience of OTT services and improve network efficiency. We analysis the characteristics and QoE factors of OTT services over 5G wireless network. With the QoE factors and current QoE management situation, we summarize OTT services QoE quantification and evaluation methods, present QoE-driven radio resource management and optimization solutions. Then, we propose a framework and whole evaluation procedure which aim at obtaining the accurate user experience value as well as improving network efficiency and optimizing the user experience.",
"title": ""
},
{
"docid": "0de4fb7e390aab6ebf446bc07118c1d9",
"text": "When using a mathematical formula for search (query-by-expression), the suitability of retrieved formulae often depends more upon symbol identities and layout than deep mathematical semantics. Using a Symbol Layout Tree representation for formula appearance, we propose the Maximum Subtree Similarity (MSS) for ranking formulae based upon the subexpression whose symbols and layout best match a query formula. Because MSS is too expensive to apply against a complete collection, the Tangent-3 system first retrieves expressions using an inverted index over symbol pair relationships, ranking hits using the Dice coefficient; the top-k formulae are then re-ranked by MSS. Tangent-3 obtains state-of-the-art performance on the NTCIR-11 Wikipedia formula retrieval benchmark, and is efficient in terms of both space and time. Retrieval systems for other graphical forms, including chemical diagrams, flowcharts, figures, and tables, may benefit from adopting this approach.",
"title": ""
},
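The first-stage ranking the abstract describes reduces to a Dice coefficient over sets of symbol-pair relationship tuples. The toy tuples below are hand-written stand-ins for what Tangent-3 would extract from Symbol Layout Trees.

```python
# Sketch of first-stage formula scoring by Dice coefficient over symbol-pair tuples.
# The (parent, child, relation) triples are hand-made toy examples; Tangent-3
# extracts them automatically from Symbol Layout Trees.
def dice(query_pairs: set, candidate_pairs: set) -> float:
    if not query_pairs and not candidate_pairs:
        return 1.0
    inter = len(query_pairs & candidate_pairs)
    return 2.0 * inter / (len(query_pairs) + len(candidate_pairs))

# Toy representations of "x^2 + y" (query) and two candidate formulas.
query = {("x", "2", "superscript"), ("x", "+", "next"), ("+", "y", "next")}
cand1 = {("x", "2", "superscript"), ("x", "+", "next"), ("+", "1", "next")}
cand2 = {("y", "2", "superscript"), ("y", "+", "next"), ("+", "1", "next")}

ranked = sorted([("cand1", dice(query, cand1)), ("cand2", dice(query, cand2))],
                key=lambda t: t[1], reverse=True)
print(ranked)
```

In the full system, only the top-k candidates from this cheap set-overlap stage would be re-ranked by the more expensive Maximum Subtree Similarity.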
{
"docid": "23832f031f7c700f741843e54ff81b4e",
"text": "Data Mining in medicine is an emerging field of great importance to provide a prognosis and deeper understanding of disease classification, specifically in Mental Health areas. The main objective of this paper is to present a review of the existing research works in the literature, referring to the techniques and algorithms of Data Mining in Mental Health, specifically in the most prevalent diseases such as: Dementia, Alzheimer, Schizophrenia and Depression. Academic databases that were used to perform the searches are Google Scholar, IEEE Xplore, PubMed, Science Direct, Scopus and Web of Science, taking into account as date of publication the last 10 years, from 2008 to the present. Several search criteria were established such as ‘techniques’ AND ‘Data Mining’ AND ‘Mental Health’, ‘algorithms’ AND ‘Data Mining’ AND ‘dementia’ AND ‘schizophrenia’ AND ‘depression’, etc. selecting the papers of greatest interest. A total of 211 articles were found related to techniques and algorithms of Data Mining applied to the main Mental Health diseases. 72 articles have been identified as relevant works of which 32% are Alzheimer’s, 22% dementia, 24% depression, 14% schizophrenia and 8% bipolar disorders. Many of the papers show the prediction of risk factors in these diseases. From the review of the research articles analyzed, it can be said that use of Data Mining techniques applied to diseases such as dementia, schizophrenia, depression, etc. can be of great help to the clinical decision, diagnosis prediction and improve the patient’s quality of life.",
"title": ""
},
{
"docid": "e29774fe6bd529b769faca8e54202be1",
"text": "The main objective of this research is to develop a n Intelligent System using data mining modeling tec hnique, namely, Naive Bayes. It is implemented as web based applica tion in this user answers the predefined questions. It retrieves hidden data from stored database and compares the u er values with trained data set. It can answer com plex queries for diagnosing heart disease and thus assist healthcare practitioners to make intelligent clinical decisio ns which traditional decision support systems cannot. By providing effec tiv treatments, it also helps to reduce treatment cos s. Keyword: Data mining Naive bayes, heart disease, prediction",
"title": ""
}
] |
scidocsrr
|
4acc57f179240e4f07f700f4b8000b86
|
FlexSense: a transparent self-sensing deformable surface
|
[
{
"docid": "da82703c415d036320a80e1ec435e327",
"text": "In this paper, we present a novel device concept that features double-sided displays which can be folded using predefined hinges. The device concept enables users to dynamically alter both size and shape of the display and also to access the backside using fold gestures. We explore the design of such devices by investigating different types and forms of folding. Furthermore, we propose a set of interaction principles and techniques. Following a user-centered design process, we evaluate our device concept in two sessions with low-fidelity and high-fidelity prototypes.",
"title": ""
}
] |
[
{
"docid": "97353be7c54dd2ded69815bf93545793",
"text": "In recent years, with the rapid development of deep learning, it has achieved great success in the field of image recognition. In this paper, we applied the convolution neural network (CNN) on supermarket commodity identification, contributing to the study of supermarket commodity identification. Different from the QR code identification of supermarket commodity, our work applied the CNN using the collected images of commodity as input. This method has the characteristics of fast and non-contact. In this paper, we mainly did the following works: 1. Collected a small dataset of supermarket goods. 2. Built Different convolutional neural network frameworks in caffe and trained the dataset using the built networks. 3. Improved train methods by finetuning the trained model.",
"title": ""
},
{
"docid": "c630b600a0b03e9e3ede1c0132f80264",
"text": "68 AI MAGAZINE Adaptive graphical user interfaces (GUIs) automatically tailor the presentation of functionality to better fit an individual user’s tasks, usage patterns, and abilities. A familiar example of an adaptive interface is the Windows XP start menu, where a small set of applications from the “All Programs” submenu is replicated in the top level of the “Start” menu for easier access, saving users from navigating through multiple levels of the menu hierarchy (figure 1). The potential of adaptive interfaces to reduce visual search time, cognitive load, and motor movement is appealing, and when the adaptation is successful an adaptive interface can be faster and preferred in comparison to a nonadaptive counterpart (for example, Gajos et al. [2006], Greenberg and Witten [1985]). In practice, however, many challenges exist, and, thus far, evaluation results of adaptive interfaces have been mixed. For an adaptive interface to be successful, the benefits of correct adaptations must outweigh the costs, or usability side effects, of incorrect adaptations. Often, an adaptive mechanism designed to improve one aspect of the interaction, typically motor movement or visual search, inadvertently increases effort along another dimension, such as cognitive or perceptual load. The result is that many adaptive designs that were expected to confer a benefit along one of these dimensions have failed in practice. For example, a menu that tracks how frequently each item is used and adaptively reorders itself so that items appear in order from most to least frequently accessed should improve motor performance, but in reality this design can slow users down and reduce satisfaction because of the constantly changing layout (Mitchell and Schneiderman [1989]; for example, figure 2b). Commonly cited issues with adaptive interfaces include the lack of control the user has over the adaptive process and the difficulty that users may have in predicting what the system’s response will be to a user action (Höök 2000). User evaluation of adaptive GUIs is more complex than eval-",
"title": ""
},
{
"docid": "a25bd124c29b9ca41f794e327d822a91",
"text": "SUMO is an open source traffic simulation package including the simulation application itself as well as supporting tools, mainly for network import and demand modeling. SUMO helps to investigate a large variety of research topics, mainly in the context of traffic management and vehicular communications. We describe the current state of the package, its major applications, both by research topic and by example, as well as future developments and extensions. Keywords-microscopic traffic simulation; traffic management; open source; software",
"title": ""
},
{
"docid": "d5044108f375f045ce627b7c9ef389ac",
"text": "Most current information management systems can be classifi ed into text retrieval systems, relational/object database systems, or semistructured/XML database systems . However, in practice, many applications data sets involve a combination of free text, structured dat a, nd semistructured data. Hence, integration of different types of information management systems has be en, and continues to be, an active research topic. In this paper, we present a short survey of prior work o n integrating and inter-operating between text, structured, and semistructured database systems. We classify existing literature based on the kinds of systems being integrated and the approach to integration . Based on this classification, we identify the challenges and the key themes underlying existing work in th is area.",
"title": ""
},
{
"docid": "f3f441c2cf1224746c0bfbb6ce02706d",
"text": "This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.",
"title": ""
},
{
"docid": "bb408cedbb0fc32f44326eff7a7390f7",
"text": "A fully integrated SONET OC-192 transmitter IC using a standard CMOS process consists of an input data register, FIFO, CMU, and 16:1 multiplexer to give a 10Gb/s serial output. A higher FEC rate, 10.7Gb/s, is supported. This chip, using a 0.18/spl mu/m process, exceeds SONET requirements, dissipating 450mW.",
"title": ""
},
{
"docid": "79c35abdd2a3a37782dd63ea6df6e95e",
"text": "Heart disease is one of the main sources of demise around the world and it is imperative to predict the disease at a premature phase. The computer aided systems help the doctor as a tool for predicting and diagnosing heart disease. The objective of this review is to widespread about Heart related cardiovascular disease and to brief about existing decision support systems for the prediction and diagnosis of heart disease supported by data mining and hybrid intelligent techniques .",
"title": ""
},
{
"docid": "6097315ac2e4475e8afd8919d390babf",
"text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.",
"title": ""
},
{
"docid": "6be74aa3f89b9e6944d8ffeb499fb4fa",
"text": "Data replication is a key technology in distributed systems that enables higher availability and performance. This article surveys optimistic replication algorithms. They allow replica contents to diverge in the short term to support concurrent work practices and tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular.Optimistic replication deploys algorithms not seen in traditional “pessimistic” systems. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen, and reaches agreement on the final contents incrementally.We explore the solution space for optimistic replication algorithms. This article identifies key challenges facing optimistic replication systems---ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence---and provides a comprehensive survey of techniques developed for addressing these challenges.",
"title": ""
},
{
"docid": "c03bbb5685790a4e47960789b3d11a35",
"text": "Multi-class segmentation of vertebrae is a non-trivial task mainly due to the high correlation in the appearance of adjacent vertebrae. Hence, such a task calls for the consideration of both global and local context. Based on this motivation, we propose a two-staged approach that, given a computed tomography dataset of the spine, segments the five lumbar vertebrae and simultaneously labels them. The first stage employs a multi-layered perceptron performing non-linear regression for locating the lumbar region using the global context. The second stage, comprised of a fully-convolutional deep network, exploits the local context in the localised lumbar region to segment and label the lumbar vertebrae in one go. Aided with practical data augmentation for training, our approach is highly generalisable, capable of successfully segmenting both healthy and abnormal vertebrae (fractured and scoliotic spines). We consistently achieve an average Dice coefficient of over 90% on a publicly available dataset of the xVertSeg segmentation challenge of MICCAI‘16. This is particularly noteworthy because the xVertSeg dataset is beset with severe deformities in the form of vertebral fractures and scoliosis.",
"title": ""
},
{
"docid": "7056b8e792a2bd1535cf020b2aeab2c7",
"text": "The authors propose a theoretical model linking achievement goals and achievement emotions to academic performance. This model was tested in a prospective study with undergraduates (N 213), using exam-specific assessments of both goals and emotions as predictors of exam performance in an introductory-level psychology course. The findings were consistent with the authors’ hypotheses and supported all aspects of the proposed model. In multiple regression analysis, achievement goals (mastery, performance approach, and performance avoidance) were shown to predict discrete achievement emotions (enjoyment, boredom, anger, hope, pride, anxiety, hopelessness, and shame), achievement emotions were shown to predict performance attainment, and 7 of the 8 focal emotions were documented as mediators of the relations between achievement goals and performance attainment. All of these findings were shown to be robust when controlling for gender, social desirability, positive and negative trait affectivity, and scholastic ability. The results are discussed with regard to the underdeveloped literature on discrete achievement emotions and the need to integrate conceptual and applied work on achievement goals and achievement emotions.",
"title": ""
},
{
"docid": "a3628ca53dfbe7b3e10593cc361cdaac",
"text": "In order to ensure the safe supply of the drinking water the quality needs to be monitor in real time. In this paper we present a design and development of a low cost system for real time monitoring of the water quality in IOT(internet of things).the system consist of several sensors is used to measuring physical and chemical parameters of the water. The parameters such as temperature, PH, turbidity, conductivity, dissolved oxygen of the water can be measured. The measured values from the sensors can be processed by the core controller. The raspberry PI B+ model can be used as a core controller. Finally, the sensor data can be viewed on internet using cloud computing.",
"title": ""
},
{
"docid": "eb26d1188d2c85cdca5823874b4a9da2",
"text": "System and application availability continues to be a fundamental characteristic of IT services. In recent years the IT Operations team at Wolters Kluwer CT Corporation has placed special focus on this area. Using a combination of goals, metrics, processes, organizational models, communication methods, corrective maintenance, root cause analysis, preventative engineering, automated alerting, and workflow automation significant progress has been made in meeting availability SLAs or Service Level Agreements. This paper presents the background of this work, approach, details of its implementation, and results. A special focus is provided on the use of a classical ITIL view as operationalized in an Agile and DevOps environment. Keywords: System Availability, Software Reliability, ITIL, Workflow Automation, Process Engineering, Production Support, Customer Support, Product Support, Change Management, Release Management, Incident Management, Problem Management, Organizational Design, Scrum, Agile, DevOps, Service Level Agreements, Software Measurement, Microsoft SharePoint.",
"title": ""
},
{
"docid": "b16d8dddf037e60ba9121f85e7d9b45a",
"text": "Bike sharing systems, aiming at providing the missing links in public transportation systems, are becoming popular in urban cities. A key to success for a bike sharing systems is the effectiveness of rebalancing operations, that is, the efforts of restoring the number of bikes in each station to its target value by routing vehicles through pick-up and drop-off operations. There are two major issues for this bike rebalancing problem: the determination of station inventory target level and the large scale multiple capacitated vehicle routing optimization with outlier stations. The key challenges include demand prediction accuracy for inventory target level determination, and an effective optimizer for vehicle routing with hundreds of stations. To this end, in this paper, we develop a Meteorology Similarity Weighted K-Nearest-Neighbor (MSWK) regressor to predict the station pick-up demand based on large-scale historic trip records. Based on further analysis on the station network constructed by station-station connections and the trip duration, we propose an inter station bike transition (ISBT) model to predict the station drop-off demand. Then, we provide a mixed integer nonlinear programming (MINLP) formulation of multiple capacitated bike routing problem with the objective of minimizing total travel distance. To solve it, we propose an Adaptive Capacity Constrained K-centers Clustering (AdaCCKC) algorithm to separate outlier stations (the demands of these stations are very large and make the optimization infeasible) and group the rest stations into clusters within which one vehicle is scheduled to redistribute bikes between stations. In this way, the large scale multiple vehicle routing problem is reduced to inner cluster one vehicle routing problem with guaranteed feasible solutions. Finally, the extensive experimental results on the NYC Citi Bike system show the advantages of our approach for bike demand prediction and large-scale bike rebalancing optimization.",
"title": ""
},
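The MSWK regressor described above weights historical observations by how similar their weather was to the day being predicted. The following is a minimal sketch of that idea; the Euclidean similarity, the toy feature set, and the function name are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def mswk_predict(weather_today, past_weather, past_demand, k=5):
    """Hedged sketch of a meteorology-similarity-weighted k-NN regressor:
    predict a station's pick-up demand as the similarity-weighted mean of
    demands on the k historical days whose weather (e.g. temperature,
    humidity, wind) was closest to today's."""
    dists = np.linalg.norm(past_weather - weather_today, axis=1)
    idx = np.argsort(dists)[:k]
    weights = 1.0 / (dists[idx] + 1e-9)   # closer weather -> larger weight
    return np.average(past_demand[idx], weights=weights)

# Toy usage: 30 historical days, 3 weather features, hourly demand counts.
rng = np.random.default_rng(0)
past_weather = rng.normal(size=(30, 3))
past_demand = rng.integers(10, 60, size=30)
print(mswk_predict(past_weather[0] + 0.1, past_weather, past_demand))
```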
{
"docid": "e7fb4643c062e092a52ac84928ab46e9",
"text": "Object detection and tracking are main tasks in video surveillance systems. Extracting the background is an intensive task with high computational cost. This work proposes a hardware computing engine to perform background subtraction on low-cost field programmable gate arrays (FPGAs), focused on resource-limited environments. Our approach is based on the codebook algorithm and offers very low accuracy degradation. We have analyzed resource consumption and performance trade-offs in Spartan-3 FPGAs by Xilinx. In addition, an accuracy evaluation with standard benchmark sequences has been performed, obtaining better results than previous hardware approaches. The implementation is able to segment objects in sequences with resolution $$768\\times 576$$ at 50 fps using a robust and accurate approach, and an estimated power consumption of 5.13 W.",
"title": ""
},
{
"docid": "7004293690fe2fcc2e8880d08de83e7c",
"text": "Hidradenitis suppurativa (HS) is a challenging skin disease with limited therapeutic options. Obesity and metabolic syndrome are being increasingly implicated and associated with younger ages and greater metabolic severity. A 19-year-old female with an 8-year history of progressively debilitating cicatricial HS disease presented with obesity, profound anemia, leukocytosis, increased platelet count, hypoalbuminemia, and elevated liver enzymes. A combination of metformin, liraglutide, levonorgestrel-ethinyl estradiol, dapsone, and finasteride was initiated. Acute antibiotic use for recurrences and flares could be slowly discontinued. Over the course of 3 years on this regimen, the liver enzymes normalized in 1 year, followed in2 years by complete resolution of the majority of the hematological and metabolic abnormalities. The sedimentation rate reduced from over 120 to 34 mm/h. She required 1 surgical intervention for perianal disease after 9 months on the regimen. Flares greatly diminished in intensity and duration, with none in the past 6 months. Right axillary lesions have completely healed with residual disease greatly reduced. Chiefly abdominal lesions are persistent. She was able to complete high school from home, start a job, and resume a normal life. Initial weight loss of 40 pounds was not maintained. The current regimen is being well tolerated and continued.",
"title": ""
},
{
"docid": "a7e4dece49c3da2b271d3713be54fd81",
"text": "Cloud computing is a promising technology of the present and the future which uses the grid computing as its backbone. Cloud computing is the hottest topic of information and communication technology (ICT) for implementing it for individual, communities and business. It provides services such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). It provides many benefits like scalability and pay per use etc. Different deployment models are also available to fulfill the needs of the business and industry. Keywords— Cloud computing, Network, Model, Infrastructure, Platform",
"title": ""
},
{
"docid": "98cef46a572d3886c8a11fa55f5ff83c",
"text": "Deep convolutional neural networks (CNNs) have proven highly effective for visual recognition, where learning a universal representation from activations of convolutional layer plays a fundamental problem. In this paper, we present Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep architecture that quantizes the local activations of convolutional layer in a deep generative model, by training them in an end-to-end manner. To incorporate FV encoding strategy into deep generative models, we introduce Variational Auto-Encoder model, which steers a variational inference and learning in a neural network which can be straightforwardly optimized using standard stochastic gradient method. Different from the FV characterized by conventional generative models (e.g., Gaussian Mixture Model) which parsimoniously fit a discrete mixture model to data distribution, the proposed FV-VAE is more flexible to represent the natural property of data for better generalization. Extensive experiments are conducted on three public datasets, i.e., UCF101, ActivityNet, and CUB-200-2011 in the context of video action recognition and fine-grained image classification, respectively. Superior results are reported when compared to state-of-the-art representations. Most remarkably, our proposed FV-VAE achieves to-date the best published accuracy of 94.2% on UCF101.",
"title": ""
},
{
"docid": "af3faaf203d771bd7fae3363b8ec8060",
"text": "Recent advances on biometrics, information forensics, and security have improved the accuracy of biometric systems, mainly those based on facial information. However, an ever-growing challenge is the vulnerability of such systems to impostor attacks, in which users without access privileges try to authenticate themselves as valid users. In this work, we present a solution to video-based face spoofing to biometric systems. Such type of attack is characterized by presenting a video of a real user to the biometric system. To the best of our knowledge, this is the first attempt of dealing with video-based face spoofing based in the analysis of global information that is invariant to video content. Our approach takes advantage of noise signatures generated by the recaptured video to distinguish between fake and valid access. To capture the noise and obtain a compact representation, we use the Fourier spectrum followed by the computation of the visual rhythm and extraction of the gray-level co-occurrence matrices, used as feature descriptors. Results show the effectiveness of the proposed approach to distinguish between valid and fake users for video-based spoofing with near-perfect classification results.",
"title": ""
},
{
"docid": "e99a0c5a4660642caa4cb55c5e91cdb7",
"text": "We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.",
"title": ""
}
] |
scidocsrr
|
4594d2f085929dd9ae7bbe4f815a8a93
|
Next Generation Cloud Computing: New Trends and Research Directions
|
[
{
"docid": "56a35139eefd215fe83811281e4e2279",
"text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0ef173f7c32074bfebeab524354de1ec",
"text": "Text classification is an important problem with many applications. Traditional approaches represent text as a bagof-words and build classifiers based on this representation. Rather than words, entity phrases, the relations between the entities, as well as the types of the entities and relations carry much more information to represent the texts. This paper presents a novel text as network classification framework, which introduces 1) a structured and typed heterogeneous information networks (HINs) representation of texts, and 2) a meta-path based approach to link texts. We show that with the new representation and links of texts, the structured and typed information of entities and relations can be incorporated into kernels. Particularly, we develop both simple linear kernel and indefinite kernel based on metapaths in the HIN representation of texts, where we call them HIN-kernels. Using Freebase, a well-known world knowledge base, to construct HIN for texts, our experiments on two benchmark datasets show that the indefinite HIN-kernel based on weighted meta-paths outperforms the state-of-theart methods and other HIN-kernels.",
"title": ""
},
{
"docid": "9b10757ca3ca84784033c20f064078b7",
"text": "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaSbased applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system.",
"title": ""
}
] |
[
{
"docid": "db54705e3d975b6abba54a854e3e1158",
"text": "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as \"modularity\" over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.",
"title": ""
},
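The abstract above outlines Newman's spectral method: build the modularity matrix B = A − kkᵀ/2m and split the network by the signs of the leading eigenvector. A small self-contained sketch of that two-way split on a toy graph (NumPy only; the recursive multi-way extension from the paper is omitted):

```python
import numpy as np

def spectral_bisection(A):
    """Split a network into two communities using the leading eigenvector of
    Newman's modularity matrix B = A - k k^T / (2m)."""
    k = A.sum(axis=1)
    two_m = k.sum()                              # 2m = sum of degrees
    B = A - np.outer(k, k) / two_m
    eigvals, eigvecs = np.linalg.eigh(B)
    leading = eigvecs[:, np.argmax(eigvals)]
    groups = (leading >= 0).astype(int)          # sign of each entry -> community
    s = np.where(groups == 1, 1, -1)
    Q = s @ B @ s / (2 * two_m)                  # modularity Q = s^T B s / 4m
    return groups, Q

# Two 4-node cliques joined by a single edge should split cleanly.
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1
groups, Q = spectral_bisection(A)
print(groups, round(Q, 3))
```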
{
"docid": "268e0e06a23f495cc36958dafaaa045a",
"text": "Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one’s experiences—a hallmark of human intelligence from infancy—remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between “hand-engineering” and “end-to-end” learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias—the graph network—which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have also released an open-source software library for building graph networks, with demonstrations of how to use them in practice.",
"title": ""
},
{
"docid": "2d20574f353950f7805e85c55e023d37",
"text": "Stress has effect on speech characteristics and can influence the quality of speech. In this paper, we study the effect of SleepDeprivation (SD) on speech characteristics and classify Normal Speech (NS) and Sleep Deprived Speech (SDS). One of the indicators of sleep deprivation is flattened voice. We examine pitch and harmonic locations to analyse flatness of voice. To investigate, we compute the spectral coefficients that can capture the variations of pitch and harmonic patterns. These are derived using Two-Layer Cascaded-Subband Filter spread according to the pitch and harmonic frequency scale. Hidden Markov Model (HMM) is employed for statistical modeling. We use DCIEM map task corpus to conduct experiments. The analysis results show that SDS has less variation of pitch and harmonic pattern than NS. In addition, we achieve the relatively high accuracy for classification of Normal Speech (NS) and Sleep Deprived Speech (SDS) using proposed spectral coefficients.",
"title": ""
},
{
"docid": "43efacf740f920fb621cf870cb9102ce",
"text": "Vehicular Ad hoc Network (VANETs) help improve efficiency of security applications and road safety. Using the information exchanged between vehicles, the latter can warn drivers about dangerous situations. Detection and warning about such situations require reliable communication between vehicles. In fact, the IEEE 802.11p (WAVE: Wireless Access in the Vehicular Environment) was proposed to support the rapid exchange of data between the vehicles. Several Medium Access Control (MAC) protocols were also introduced for safety application VANET. In this paper, we present the different MAC basic protocols in VANET. We used simulation to compare and analyze their performances.",
"title": ""
},
{
"docid": "f1a7d6f8ae1e6b9ef837be3835f5b750",
"text": "of the 5th and 6th DIMACS Implementation Challenges, Goldwasser Johnson, and McGeoch (eds), American Mathematical Society, 2002. A Theoretician's Guide to the Experimental Analysis of Algorithms David S. Johnson AT&T Labs { Research http://www.research.att.com/ dsj/ November 25, 2001 Abstract This paper presents an informal discussion of issues that arise when one attempts to analyze algorithms experimentally. It is based on lessons learned by the author over the course of more than a decade of experimentation, survey paper writing, refereeing, and lively discussions with other experimentalists. Although written from the perspective of a theoretical computer scientist, it is intended to be of use to researchers from all elds who want to study algorithms experimentally. It has two goals: rst, to provide a useful guide to new experimentalists about how such work can best be performed and written up, and second, to challenge current researchers to think about whether their own work might be improved from a scienti c point of view. With the latter purpose in mind, the author hopes that at least a few of his recommendations will be considered controversial.",
"title": ""
},
{
"docid": "cdc5655770d58139ee3fb548022be2d5",
"text": "We propose a data mining approach to predict human wine taste preferences that is based on easily available analytical tests at the certification step. A large dataset (when compared to other studies in this domain) is considered, with white and red vinho verde samples (from Portugal). Three regression techniques were applied, under a computationally efficient procedure that performs simultaneous variable and model selection. The support vector machine achieved promising results, outperforming the multiple regression and neural network methods. Such model is useful to support the oenologist wine tasting evaluations and improve wine production. Furthermore, similar techniques can help in target marketing by modeling consumer tastes from niche markets.",
"title": ""
},
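The abstract above models sensory wine quality from analytical tests with a support vector machine. A hedged baseline reproduction of that setup is sketched below; the file name, separator and column names follow the public UCI wine-quality data and are assumptions here, and the authors' actual procedure additionally performed simultaneous variable and model selection:

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Semicolon-separated CSV with physicochemical columns and a "quality" score.
df = pd.read_csv("winequality-white.csv", sep=";")
X, y = df.drop(columns="quality"), df["quality"]

# Standardize inputs, then fit an RBF support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("cross-validated MAD:", -scores.mean())
```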
{
"docid": "2d43992a8eb6e97be676c04fc9ebd8dd",
"text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.",
"title": ""
},
{
"docid": "d063f8a20e2b6522fe637794e27d7275",
"text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.",
"title": ""
},
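The abstract above quantizes each SIFT feature into a descriptive bit-vector whose first tens of bits act as a code word, avoiding any trained codebook. The sketch below illustrates that codebook-free idea with a simple per-descriptor median threshold; the actual thresholding rule in the paper may be more elaborate, so treat this only as an illustration of scalar quantization:

```python
import numpy as np

def scalar_quantize(desc, code_bits=32):
    """Binarize a 128-d SIFT descriptor by thresholding each dimension
    against the descriptor's own median, then use the first `code_bits`
    bits as the code word for the inverted file."""
    bits = (desc > np.median(desc)).astype(np.uint8)
    code_word = int("".join(map(str, bits[:code_bits])), 2)
    return bits, code_word

# Toy descriptor with values in the usual 0..255 SIFT range.
desc = np.random.default_rng(1).integers(0, 256, size=128)
bits, code = scalar_quantize(desc)
print(len(bits), hex(code))
```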
{
"docid": "db7bc8bbfd7dd778b2900973f2cfc18d",
"text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.",
"title": ""
},
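The autocalibration above exploits the fact that, at rest, the norm of the calibrated measurement must equal the magnitude of gravity, and estimates bias and per-axis scale factors from user-generated data. The sketch below uses an ordinary nonlinear least-squares solver in place of the paper's statistical linearization, so it illustrates the constraint rather than their exact estimator:

```python
import numpy as np
from scipy.optimize import least_squares

G = 9.81  # gravity magnitude in m/s^2

def residuals(params, static_samples):
    """For each static sample, the calibrated norm should equal gravity.
    params = [bx, by, bz, sx, sy, sz] (bias and per-axis scale)."""
    b, s = params[:3], params[3:]
    calibrated = (static_samples - b) * s
    return np.linalg.norm(calibrated, axis=1) - G

def autocalibrate(static_samples):
    x0 = np.array([0, 0, 0, 1, 1, 1], dtype=float)   # no bias, unit scale
    sol = least_squares(residuals, x0, args=(static_samples,))
    return sol.x[:3], sol.x[3:]

# Toy data: a miscalibrated sensor observed in many static orientations.
rng = np.random.default_rng(2)
true_b, true_s = np.array([0.2, -0.1, 0.05]), np.array([1.02, 0.97, 1.01])
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = (G * dirs) / true_s + true_b + rng.normal(scale=0.01, size=(200, 3))
bias, scale = autocalibrate(raw)
print(bias.round(3), scale.round(3))
```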
{
"docid": "89865dbb80fcb2d9c5d4d4fe4fe10b83",
"text": "Elaborate efforts have been made to eliminate fake markings and refine <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula>-markings in the existing modified or improved Karp–Miller trees for various classes of unbounded Petri nets since the late 1980s. The main issues fundamentally are incurred due to the generation manners of the trees that prematurely introduce some potentially unbounded markings with <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula> symbols and keep their growth into new ones. Aiming at addressing them, this work presents a non-Karp–Miller tree called a lean reachability tree (LRT). First, a sufficient and necessary condition of the unbounded places and some reachability properties are established to reveal the features of unbounded nets. Then, we present an LRT generation algorithm with a sufficiently enabling condition (SEC). When generating a tree, SEC requires that the components of a covering node are not replaced by <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula> symbols, but continue to grow until any transition on an output path of an unbounded place has been branch-enabled at least once. In return, no fake marking is produced and no legal marking is lost during the tree generation. We prove that LRT can faithfully express by folding, instead of equivalently representing, the reachability set of an unbounded net. Also, some properties of LRT are examined and a sufficient condition of deadlock existence based on it is given. The case studies show that LRT outperforms the latest modified Karp–Miller trees in terms of size, expressiveness, and applicability. It can be applied to the analysis of the emerging discrete event systems with infinite states.",
"title": ""
},
{
"docid": "9b793826ceb4891f95c7e8b2ef7d72b4",
"text": "Machine-to-machine (M2M) communication, also referred to as Internet of Things (IoT), is a global network of devices such as sensors, actuators, and smart appliances which collect information, and can be controlled and managed in real time over the Internet. Due to their universal coverage, cellular networks and the Internet together offer the most promising foundation for the implementation of M2M communication. With the worldwide deployment of the fourth generation (4G) of cellular networks, the long-term evolution (LTE) and LTE-advanced standards have defined several quality-of-service classes to accommodate the M2M traffic. However, cellular networks are mainly optimized for human-to-human (H2H) communication. The characteristics of M2M traffic are different from the human-generated traffic and consequently create sever problems in both radio access and the core networks (CNs). This survey on M2M communication in LTE/LTE-A explores the issues, solutions, and the remaining challenges to enable and improve M2M communication over cellular networks. We first present an overview of the LTE networks and discuss the issues related to M2M applications on LTE. We investigate the traffic issues of M2M communications and the challenges they impose on both access channel and traffic channel of a radio access network and the congestion problems they create in the CN. We present a comprehensive review of the solutions for these problems which have been proposed in the literature in recent years and discuss the advantages and disadvantages of each method. The remaining challenges are also discussed in detail.",
"title": ""
},
{
"docid": "a64bf1840a6f7d82d5ca4dc10bf87453",
"text": "Cloud-based wireless networking system applies centralized resource pooling to improve operation efficiency. Fog-based wireless networking system reduces latency by placing processing units in the network edge. Confluence of fog and cloud design paradigms in 5G radio access network will better support diverse applications. In this article, we describe the recent advances in fog radio access network (F-RAN) research, hybrid fog-cloud architecture, and system design issues. Furthermore, the GPP platform facilitates the confluence of computational and communications processing. Through observations from GPP platform testbed experiments and simulations, we discuss the opportunities of integrating the GPP platform with F-RAN architecture.",
"title": ""
},
{
"docid": "06525bcc03586c8d319f5d6f1d95b852",
"text": "Many different automatic color correction approaches have been proposed by different research communities in the past decade. However, these approaches are seldom compared, so their relative performance and applicability are unclear. For multi-view image and video stitching applications, an ideal color correction approach should be effective at transferring the color palette of the source image to the target image, and meanwhile be able to extend the transferred color from the overlapped area to the full target image without creating visual artifacts. In this paper we evaluate the performance of color correction approaches for automatic multi-view image and video stitching. We consider nine color correction algorithms from the literature applied to 40 synthetic image pairs and 30 real mosaic image pairs selected from different applications. Experimental results show that both parametric and non-parametric approaches have members that are effective at transferring colors, while parametric approaches are generally better than non-parametric approaches in extendability.",
"title": ""
},
{
"docid": "d89ba95eb3bd7aca4a7acb17be973c06",
"text": "An UWB elliptical slot antenna embedded with open-end slit on the tuning stub or parasitic strip on the aperture for achieving the band-notch characteristics has been proposed in this conference. Experimental results have also confirmed band-rejection capability for the proposed antenna at the desired band, as well as nearly omni-direction radiation features is still preserved. Finally, how to shrink the geometry dimensions of the UWB antenna will be investigated in the future.",
"title": ""
},
{
"docid": "134e5a0da9a6aa9b3c5e10a69803c3a3",
"text": "The objectives of this study were to determine the prevalence of overweight and obesity in Turkey, and to investigate their association with age, gender, and blood pressure. A crosssectional population-based study was performed. A total of 20,119 inhabitants (4975 women and 15,144 men, age > 20 years) from 11 Anatolian cities in four geographic regions were screened for body weight, height, and systolic and diastolic blood pressure between the years 1999 and 2000. The overall prevalence rate of overweight was 25.0% and of obesity was 19.4%. The prevalence of overweight among women was 24.3% and obesity 24.6%; 25.9% of men were overweight, and 14.4% were obese. Mean body mass index (BMI) of the studied population was 27.59 +/- 4.61 kg/m(2). Mean systolic and diastolic blood pressure for women were 131.0 +/- 41.0 and 80.2 +/- 16.3 mm Hg, and for men 135.0 +/- 27.3 and 83.2 +/- 16.0 mm Hg. There was a positive linear correlation between BMI and blood pressure, and between age and blood pressure in men and women. Obesity and overweight are highly prevalant in Turkey, and they constitute independent risk factors for hypertension.",
"title": ""
},
{
"docid": "13c8d93a834e4a82f229239dc26d8775",
"text": "The popularity of Twitter for information discovery, coupled with the automatic shortening of URLs to save space, given the 140 character limit, provides cybercriminals with an opportunity to obfuscate the URL of a malicious Web page within a tweet. Once the URL is obfuscated, the cybercriminal can lure a user to click on it with enticing text and images before carrying out a cyber attack using a malicious Web server. This is known as a drive-by download. In a drive-by download a user's computer system is infected while interacting with the malicious endpoint, often without them being made aware the attack has taken place. An attacker can gain control of the system by exploiting unpatched system vulnerabilities and this form of attack currently represents one of the most common methods employed. In this paper we build a machine learning model using machine activity data and tweet metadata to move beyond post-execution classification of such URLs as malicious, to predict a URL will be malicious with 0.99 F-measure (using 10-fold cross-validation) and 0.833 (using an unseen test set) at 1 s into the interaction with the URL. Thus, providing a basis from which to kill the connection to the server before an attack has completed and proactively blocking and preventing an attack, rather than reacting and repairing at a later date.",
"title": ""
},
{
"docid": "46b5e1898dba479b7158ce5c9c0b94a8",
"text": "Finding a parking place in a busy city centre is often a frustrating task for many drivers; time and fuel are wasted in the quest for a vacant spot and traffic in the area increases due to the slow moving vehicles circling around. In this paper, we present the results of a survey on the needs of drivers from parking infrastructures from a smart services perspective. As smart parking systems are becoming a necessity in today's urban areas, we discuss the latest trends in parking availability monitoring, parking reservation and dynamic pricing schemes. We also examine how these schemes can be integrated forming technologically advanced parking infrastructures whose aim is to benefit both the drivers and the parking operators alike.",
"title": ""
},
{
"docid": "3e974f6838a652cf19e4dac68b119286",
"text": "Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.",
"title": ""
},
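The tutorial abstract above centres on segmented regression over an interrupted time series. Below is a minimal sketch of the canonical level-and-trend-change model on simulated monthly data; autocorrelation, seasonality and time-varying confounders, which the tutorial discusses, are deliberately omitted:

```python
# Segmented regression for an interrupted time series:
#   Y_t = b0 + b1*time + b2*level + b3*trend + e_t
# where `level` is 0 before / 1 after the intervention and `trend` counts
# time elapsed since the intervention.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

n, t0 = 48, 24                                   # 48 monthly points, change at month 24
time = np.arange(n)
level = (time >= t0).astype(int)
trend = np.where(time >= t0, time - t0, 0)
rng = np.random.default_rng(3)
y = 50 + 0.2 * time - 5 * level - 0.4 * trend + rng.normal(0, 1.5, n)

df = pd.DataFrame({"y": y, "time": time, "level": level, "trend": trend})
fit = smf.ols("y ~ time + level + trend", data=df).fit()
print(fit.params)                                # b2: level change, b3: slope change
```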
{
"docid": "5dfc0ec364055f79d19ee8cf0b0cfeff",
"text": "Cancer cachexia is a common problem among advanced cancer patients. A mixture of β-hydroxyl β-methyl butyrate, glutamine, and arginine (HMB/Arg/Gln) previously showed activity for increasing lean body mass (LBM) among patients with cancer cachexia. Therefore a phase III trial was implemented to confirm this activity. Four hundred seventy-two advanced cancer patients with between 2% and 10% weight loss were randomized to a mixture of β-hydroxyl β-methyl butyrate, glutamine, and arginine or an isonitrogenous, isocaloric control mixture taken twice a day for 8 weeks. Lean body mass was estimated by bioimpedance and skin-fold measurements. Body plethysmography was used when available. Weight, the Schwartz Fatigue Scale, and the Spitzer Quality of Life Scale were also measured. Only 37% of the patients completed protocol treatment. The majority of the patient loss was because of patient preference (45% of enrolled patients). However, loss of power was not an issue because of the planned large target sample size. Based on an intention to treat analysis, there was no statistically significant difference in the 8-week lean body mass between the two arms. The secondary endpoints were also not significantly different between the arms. Based on the results of the area under the curve (AUC) analysis, patients receiving HMB/Arg/Gln had a strong trend higher LBM throughout the study as measured by both bioimpedance (p = 0.08) and skin-fold measurements (p = 0.08). Among the subset of patients receiving concurrent chemotherapy, there were again no significant differences in the endpoints. The secondary endpoints were also not significantly different between the arms. This trial was unable to adequately test the ability of β-hydroxy β-methylbutyrate, glutamine, and arginine to reverse or prevent lean body mass wasting among cancer patients. Possible contributing factors beyond the efficacy of the intervention were the inability of patients to complete an 8-week course of treatment and return in a timely fashion for follow-up assessment, and because the patients may have only had weight loss possible not related to cachexia, but other causes of weight loss, such as decreased appetite. However, there was a strong trend towards an increased body mass among patients taking the Juven® compound using the secondary endpoint of AUC.",
"title": ""
},
{
"docid": "80ca2b3737895e9222346109ac092637",
"text": "The common ground between figurative language and humour (in the form of jokes) is what Koestler (1964) termed the bisociation of ideas. In both jokes and metaphors, two disparate concepts are brought together, but the nature and the purpose of this conjunction is different in each case. This paper focuses on this notion of boundaries and attempts to go further by asking the question “when does a metaphor become a joke?”. More specifically, the main research questions of the paper are: (a) How do speakers use metaphor in discourse for humorous purposes? (b) What are the (metaphoric) cognitive processes that relate to the creation of humour in discourse? (c) What does the study of humour in discourse reveal about the nature of metaphoricity? This paper answers these questions by examining examples taken from a three-hour conversation, and considers how linguistic theories of humour (Raskin, 1985; Attardo and Raskin, 1991; Attardo, 1994; 2001) and cognitive theories of metaphor and blending (Lakoff and Johnson, 1980; Fauconnier and Turner, 2002) can benefit from each other. Boundaries in Humour and Metaphor The goal of this paper is to explore the relationship between metaphor (and, more generally, blending) and humour, in order to attain a better understanding of the cognitive processes that are involved or even contribute to laughter in discourse. This section will present briefly research in both areas and will identify possible common ground between the two. More specifically, the notion of boundaries will be explored in both areas. The following section explores how metaphor can be used for humorous purposes in discourse by applying relevant theories of humour and metaphor to conversational data. Linguistic theories of humour highlight the importance of duality and tension in humorous texts. Koestler (1964: 51) in discussing comic creativity notes that: The sudden bisociation of an idea or event with two habitually incompatible matrices will produce a comic effect, provided that the narrative, the semantic pipeline, carries the right kind of emotional tension. When the pipe is punctured, and our expectations are fooled, the now redundant tension gushes out in laughter, or is spilled in the gentler form of the sou-rire [my emphasis]. This oft-quoted passage introduces the basic themes and mechanisms that later were explored extensively within contemporary theories of humour: a humorous text must relate to two different and opposing in some way scenarios; this duality is not",
"title": ""
}
] |
scidocsrr
|
5c834f5f0c836067419cae60d9fbdede
|
Stance Classification in Rumours as a Sequential Task Exploiting the Tree Structure of Social Media Conversations
|
[
{
"docid": "4ac3c3fb712a1121e0990078010fe4b0",
"text": "1.1 Introduction Relational data has two characteristics: first, statistical dependencies exist between the entities we wish to model, and second, each entity often has a rich set of features that can aid classification. For example, when classifying Web documents, the page's text provides much information about the class label, but hyperlinks define a relationship between pages that can improve classification [Taskar et al., 2002]. Graphical models are a natural formalism for exploiting the dependence structure among entities. Traditionally, graphical models have been used to represent the joint probability distribution p(y, x), where the variables y represent the attributes of the entities that we wish to predict, and the input variables x represent our observed knowledge about the entities. But modeling the joint distribution can lead to difficulties when using the rich local features that can occur in relational data, because it requires modeling the distribution p(x), which can include complex dependencies. Modeling these dependencies among inputs can lead to intractable models, but ignoring them can lead to reduced performance. A solution to this problem is to directly model the conditional distribution p(y|x), which is sufficient for classification. This is the approach taken by conditional random fields [Lafferty et al., 2001]. A conditional random field is simply a conditional distribution p(y|x) with an associated graphical structure. Because the model is",
"title": ""
},
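The passage above defines a conditional random field as a conditional distribution p(y|x) with an associated graphical structure. For reference, the standard log-linear form over cliques with feature functions and weights is stated below; the notation is mine, not taken verbatim from the chapter:

```latex
% General log-linear form of a conditional random field over cliques C,
% with feature functions f_k and weights \lambda_k.
p(\mathbf{y} \mid \mathbf{x})
  = \frac{1}{Z(\mathbf{x})}
    \prod_{c \in \mathcal{C}}
    \exp\!\Big( \sum_{k} \lambda_k \, f_k(\mathbf{y}_c, \mathbf{x}) \Big),
\qquad
Z(\mathbf{x})
  = \sum_{\mathbf{y}'}
    \prod_{c \in \mathcal{C}}
    \exp\!\Big( \sum_{k} \lambda_k \, f_k(\mathbf{y}'_c, \mathbf{x}) \Big).
```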
{
"docid": "e4dd72a52d4961f8d4d8ee9b5b40d821",
"text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.",
"title": ""
},
{
"docid": "7641f8f3ed2afd0c16665b44c1216e79",
"text": "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets.",
"title": ""
},
{
"docid": "f2478e4b1156e112f84adbc24a649d04",
"text": "Community Question Answering (cQA) provides new interesting research directions to the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES/NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.",
"title": ""
}
] |
[
{
"docid": "bdadf0088654060b3f1c749ead0eea6e",
"text": "This article gives an introduction and overview of the field of pervasive gaming, an emerging genre in which traditional, real-world games are augmented with computing functionality, or, depending on the perspective, purely virtual computer entertainment is brought back to the real world.The field of pervasive games is diverse in the approaches and technologies used to create new and exciting gaming experiences that profit by the blend of real and virtual game elements. We explicitly look at the pervasive gaming sub-genres of smart toys, affective games, tabletop games, location-aware games, and augmented reality games, and discuss them in terms of their benefits and critical issues, as well as the relevant technology base.",
"title": ""
},
{
"docid": "9bdee31e49213cd33d157b61ea788230",
"text": "Situational understanding (SU) requires a combination of insight — the ability to accurately perceive an existing situation — and foresight — the ability to anticipate how an existing situation may develop in the future. SU involves information fusion as well as model representation and inference. Commonly, heterogenous data sources must be exploited in the fusion process: often including both hard and soft data products. In a coalition context, data and processing resources will also be distributed and subjected to restrictions on information sharing. It will often be necessary for a human to be in the loop in SU processes, to provide key input and guidance, and to interpret outputs in a way that necessitates a degree of transparency in the processing: systems cannot be “black boxes”. In this paper, we characterize the Coalition Situational Understanding (CSU) problem in terms of fusion, temporal, distributed, and human requirements. There is currently significant interest in deep learning (DL) approaches for processing both hard and soft data. We analyze the state-of-the-art in DL in relation to these requirements for CSU, and identify areas where there is currently considerable promise, and key gaps.",
"title": ""
},
{
"docid": "9592fc0ec54a5216562478414dc68eb4",
"text": "We consider the problem of finding the best arm in a stochastic multi-armed bandit game. The regret of a forecaster is here defined by the gap between the mean reward of the optimal arm and the mean reward of the ultimately chosen arm. We propose a highly exploring UCB policy and a new algorithm based on successive rejects. We show that these algorithms are essentially optimal since their regret decreases exponentially at a rate which is, up to a logarithmic factor, the best possible. However, while the UCB policy needs the tuning of a parameter depending on the unobservable hardness of the task, the successive rejects policy benefits from being parameter-free, and also independent of the scaling of the rewards. As a by-product of our analysis, we show that identifying the best arm (when it is unique) requires a number of samples of order (up to a log(K) factor) ∑ i 1/∆ 2 i , where the sum is on the suboptimal arms and ∆i represents the difference between the mean reward of the best arm and the one of arm i. This generalizes the well-known fact that one needs of order of 1/∆ samples to differentiate the means of two distributions with gap ∆.",
"title": ""
},
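The abstract above proposes the parameter-free Successive Rejects policy for best-arm identification. A compact sketch of that phase-based elimination scheme (toy Bernoulli arms; the phase lengths follow the published schedule, while the reward model is illustrative):

```python
import numpy as np

def successive_rejects(pull, K, budget):
    """Successive Rejects: split the budget into K-1 phases, sample every
    surviving arm equally within a phase, and drop the empirically worst
    arm at the end of each phase."""
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    active = list(range(K))
    counts = np.zeros(K)
    sums = np.zeros(K)
    n_prev = 0
    for k in range(1, K):
        n_k = int(np.ceil((budget - K) / (log_bar * (K + 1 - k))))
        for arm in active:
            for _ in range(n_k - n_prev):
                sums[arm] += pull(arm)
                counts[arm] += 1
        n_prev = n_k
        means = {a: sums[a] / counts[a] for a in active}
        active.remove(min(active, key=means.get))   # reject the worst arm
    return active[0]                                 # recommended best arm

# Toy Bernoulli bandit: arm 8 (mean 0.9) is best.
true_means = np.linspace(0.1, 0.9, 9)
rng = np.random.default_rng(4)
best = successive_rejects(lambda a: float(rng.random() < true_means[a]),
                          K=9, budget=2000)
print("chosen arm:", best)
```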
{
"docid": "1ca692464d5d7f4e61647bf728941519",
"text": "During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RS(C)) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RS(C) neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses.",
"title": ""
},
{
"docid": "83c0e0c81a809314e93471e9bcd6aabe",
"text": "A rail-to-rail amplifier with an offset cancellation, which is suitable for high color depth and high-resolution liquid crystal display (LCD) drivers, is proposed. The amplifier incorporates dual complementary differential pairs, which are classified as main and auxiliary transconductance amplifiers, to obtain a full input voltage swing and an offset canceling capability. Both offset voltage and injection-induced error, due to the device mismatch and charge injection, respectively, are greatly reduced. The offset cancellation and charge conservation, which is used to reduce the dynamic power consumption, are operated during the same time slot so that the driving period does not need to increase. An experimental prototype amplifier is implemented with 0.35m CMOS technology. The circuit draws 7.5 A static current and exhibits the settling time of 3 s, for a voltage swing of 5 V under a 3.4 k resistance, and a 140 pF capacitance load with a power supply of 5 V. The offset voltage of the amplifier with offset cancellation is 0.48 mV.",
"title": ""
},
{
"docid": "f1773b7fcd2ab70273f096b6da77b7a4",
"text": "The senses we call upon when interacting with technology are restricted. We mostly rely on vision and hearing, and increasingly touch, but taste and smell remain largely unused. Although our knowledge about sensory systems and devices has grown rapidly over the past few decades, there is still an unmet challenge in understanding people's multisensory experiences in HCI. The goal is that by understanding the ways in which our senses process information and how they relate to one another, it will be possible to create richer experiences for human-‐ technology interactions. To meet this challenge, we need specific actions within the HCI community. First, we must determine which tactile, gustatory, and olfactory experiences we can design for, and how to meaningfully stimulate them when people interact with technology. Second, we need to build on previous frameworks for multisensory design while also creating new ones. Third, we need to design interfaces that allow the stimulation of unexplored sensory inputs (e.g., digital smell), as well as interfaces that take into account the relationships between the senses (e.g., integration of taste and smell into flavor). Finally, it is vital to understand what limitations come into play when users need to monitor information from more than one sense simultaneously. Though much development is needed, in recent years we have witnessed progress in multisensory experiences involving touch. It is key for HCI to leverage the full range of tactile sensations (vibrations, pressure, force, balance, heat, coolness/wetness, electric shocks, pain and itch, etc.), taking into account the active and passive modes of touch and its integration with the other senses. This will undoubtedly provide new tools for interactive experience design, and will help to uncover the fine granularity of sensory stimulation and emotional responses.",
"title": ""
},
{
"docid": "d00691959822087a1bddc3b411d27239",
"text": "We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.",
"title": ""
},
{
"docid": "6e00567c5c33d899af9b5a67e37711a3",
"text": "The adoption of cloud computing facilities and programming models differs vastly between different application domains. Scalable web applications, low-latency mobile backends and on-demand provisioned databases are typical cases for which cloud services on the platform or infrastructure level exist and are convincing when considering technical and economical arguments. Applications with specific processing demands, including high-performance computing, high-throughput computing and certain flavours of scientific computing, have historically required special configurations such as computeor memory-optimised virtual machine instances. With the rise of function-level compute instances through Function-as-a-Service (FaaS) models, the fitness of generic configurations needs to be re-evaluated for these applications. We analyse several demanding computing tasks with regards to how FaaS models compare against conventional monolithic algorithm execution. Beside the comparison, we contribute a refined FaaSification process for legacy software and provide a roadmap for future work. 1 Research Direction The ability to turn programmed functions or methods into ready-to-use cloud services is leading to a seemingly serverless development and deployment experience for application software engineers [1]. Without the necessity to allocate resources beforehand, prototyping new features and workflows becomes faster and more convenient to application service providers. These advantages have given boost to an industry trend consequently called Serverless Computing. The more precise, almost overlapping term in accordance with Everything-asa-Service (XaaS) cloud computing taxonomies is Function-as-a-Service (FaaS) [4]. In the FaaS layer, functions, either on the programming language level or as abstract concept around binary implementations, are executed synchronously or asynchronously through multi-protocol triggers. Function instances are provisioned on demand through coldstart or warmstart of the implementation in conjunction with an associated configuration in few milliseconds, elastically scaled as needed, and charged per invocation and per product of period of time and resource usage, leading to an almost perfect pay-as-you-go utility pricing model [11]. FaaS is gaining traction primarily in three areas. First, in Internet-of-Things applications where connected devices emit data sporadically. Second, for web applications with light-weight backend tasks. Third, as glue code between other cloud computing services. In contrast to the industrial popularity, no work is known to us which explores its potential for scientific and high-performance computing applications with more demanding execution requirements. From a cloud economics and strategy perspective, FaaS is a refinement of the platform layer (PaaS) with particular tools and interfaces. Yet from a software engineering and deployment perspective, functions are complementing other artefact types which are deployed into PaaS or underlying IaaS environments. Fig. 1 explains this positioning within the layered IaaS, PaaS and SaaS service classes, where the FaaS runtime itself is subsumed under runtime stacks. Performing experimental or computational science research with FaaS implies that the two roles shown, end user and application engineer, are adopted by a single researcher or a team of researchers, which is the setting for our research. Fig. 1. 
Positioning of FaaS in cloud application development The necessity to conduct research on FaaS for further application domains stems from the unique execution characteristics. Service instances are heuristically stateless, ephemeral, and furthermore limited in resource allotment and execution time. They are moreover isolated from each other and from the function management and control plane. In public commercial offerings, they are billed in subsecond intervals and terminated after few minutes, but as with any cloud application, private deployments are also possible. Hence, there is a trade-off between advantages and drawbacks which requires further analysis. For example, existing parallelisation frameworks cannot easily be used at runtime as function instances can only, in limited ways, invoke other functions without the ability to configure their settings. Instead, any such parallelisation needs to be performed before deployment with language-specific tools such as Pydron for Python [10] or Calvert’s compiler for Java [3]. For resourceand time-demanding applications, no special-purpose FaaS instances are offered by commercial cloud providers. This is a surprising observation given the multitude of options in other cloud compute services beyond general-purpose offerings, especially on the infrastructure level (IaaS). These include instance types optimised for data processing (with latest-generation processors and programmable GPUs), for memory allocation, and for non-volatile storage (with SSDs). Amazon Web Services (AWS) alone offers 57 different instance types. Our work is therefore concerned with the assessment of how current generic one-size-fits-all FaaS offerings handle scientific computing workloads, whether the proliferation of specialised FaaS instance types can be expected and how they would differ from commonly offered IaaS instance types. In this paper, we contribute specifically (i) a refined view on how software can be made fitting into special-purpose FaaS contexts with a high degree of automation through a process named FaaSification, and (ii) concepts and tools to execute such functions in constrained environments. In the remainder of the paper, we first present background information about FaaS runtimes, including our own prototypes which allow for providerindependent evaluations. Subsequently, we present four domain-specific scientific experiments conducted using FaaS to gain broad knowledge about resource requirements beyond general-purpose instances. We summarise the findings and reason about the implications for future scientific computing infrastructures. 2 Background on Function-as-a-Service 2.1 Programming Models and Runtimes The characteristics of function execution depend primarily on the FaaS runtime in use. There are broadly three categories of runtimes: 1. Proprietary commercial services, such as AWS Lambda, Google Cloud Functions, Azure Functions and Oracle Functions. 2. Open source alternatives with almost matching interfaces and functionality, such as Docker-LambCI, Effe, Google Cloud Functions Emulator and OpenLambda [6], some of which focus on local testing rather than operation. 3. Distinct open source implementations with unique designs, such as Apache OpenWhisk, Kubeless, IronFunctions and Fission, some of which are also available as commercial services, for instance IBM Bluemix OpenWhisk [5]. 
The uniqueness is a consequence of the integration with other cloud stacks (Kubernetes, OpenStack), the availability of web and command-line interfaces, the set of triggers and the level of isolation in multi-tenant operation scenarios, which is often achieved through containers. In addition, due to the often non-trivial configuration of these services, a number of mostly service-specific abstraction frameworks have become popular among developers, such as PyWren, Chalice, Zappa, Apex and the Serverless Framework [8]. The frameworks and runtimes differ in their support for programming languages, but also in the function signatures, parameters and return values. Hence, a comparison of the entire set of offerings requires a baseline. The research in this paper is congruously conducted with the mentioned commercial FaaS providers as well as with our open-source FaaS tool Snafu which allows for managing, executing and testing functions across provider-specific interfaces [14]. The service ecosystem relationship between Snafu and the commercial FaaS providers is shown in Fig. 2. Snafu is able to import services from three providers (AWS Lambda, IBM Bluemix OpenWhisk, Google Cloud Functions) and furthermore offers a compatible control plane to all three of them in its current implementation version. At its core, it contains a modular runtime environment with prototypical maturity for functions implemented in JavaScript, Java, Python and C. Most importantly, it enables repeatable research as it can be deployed as a container, in a virtual machine or on a bare metal workstation. Notably absent from the categories above are FaaS offerings in e-science infrastructures and research clouds, despite the programming model resembling widely used job submission systems. We expect our practical research contributions to overcome this restriction in a vendor-independent manner. Snafu, for instance, is already available as an alpha-version launch profile in the CloudLab testbed federated across several U.S. installations with a total capacity of almost 15000 cores [12], as well as in EGI’s federated cloud across Europe. Fig. 2. Snafu and its ecosystem and tooling Using Snafu, it is possible to adhere to the diverse programming conventions and execution conditions at commercial services while at the same time controlling and lifting the execution restrictions as necessary. In particular, it is possible to define memory-optimised, storage-optimised and compute-optimised execution profiles which serve to conduct the anticipated research on generic (general-purpose) versus specialised (special-purpose) cloud offerings for scientific computing. Snafu can execute in single process mode as well as in a loadbalancing setup where each request is forwarded by the master instance to a slave instance which in turn executes the function natively, through a languagespecific interpreter or through a container. Table 1 summarises the features of selected FaaS runtimes. Table 1. FaaS runtimes and their features Runtime Languages Programming model Import/Export AWS Lambda JavaScript, Python, Java, C# Lambda – Google Cloud Functions JavaScrip",
"title": ""
},
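The passage above describes FaaS runtimes that execute individual functions on demand. Purely as an illustration (not taken from the paper or from Snafu), the sketch below shows the kind of function-level unit such runtimes host, following the AWS Lambda Python handler convention `handler(event, context)`; the prime-counting payload is a hypothetical stand-in for a compute-bound scientific task.

```python
import json
import math


def handler(event, context):
    """Minimal FaaS-style function: the runtime passes an input event and a
    context object, and the function returns a JSON-serialisable result."""
    # Hypothetical payload: {"n": 100000} -- a small CPU-bound task used here
    # only as a stand-in for a demanding scientific workload.
    n = int(event.get("n", 1000))
    primes = sum(
        1 for k in range(2, n)
        if all(k % d for d in range(2, int(math.isqrt(k)) + 1))
    )
    return {"statusCode": 200, "body": json.dumps({"n": n, "primes": primes})}


if __name__ == "__main__":
    # Local invocation without any cloud provider, e.g. for testing in Snafu-like tools.
    print(handler({"n": 10000}, None))
```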
{
"docid": "088cb7992c1d7910151b1008a70e5cd1",
"text": "Cable-actuated parallel manipulators (CPMs) rely on cables instead of rigid links to manipulate the moving platform in the taskspace. Upper and lower bounds imposed on the cable tensions limit the force capability in CPMs and render certain forces infeasible at the end effector. This paper presents a geometrical analysis of the problems to 1) determine whether a CPM is capable of balancing a given wrench within the cable tension limits (feasibility check); 2) minimize the 2-norm of the cable tensions that balance feasible wrenches; and 3) check for the existence of an all-positive nullspace vector, which is a necessary condition to have a wrench-closure configuration in CPMs. The unified approach used in this analysis is systematic and geometrically intuitive that is based on the formulation of the static force equilibrium problem as an intersection between two convex sets and the application of Dykstra's alternating projection algorithm to find the projection of a point onto that intersection. In the case of infeasible wrenches, the algorithm can determine whether the infeasibility is because of the cable tension limits or the non-wrench-closure configuration. For the former case, a method was developed by which this algorithm can be used to extend the cable tension limits to balance infeasible wrenches. In addition, the performance of the algorithm is explained in the case of incompletely restrained cable-driven manipulators and the case of manipulators at singular poses. This paper also discusses the algorithm convergence and termination rule. This geometrical and systematic approach is intended for use as a convenient tool for cable tension analysis during design.",
"title": ""
},
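The abstract above formulates cable-tension analysis as projecting a point onto the intersection of two convex sets (the static equilibrium constraint and the tension box) via Dykstra's alternating projection algorithm. The following is a minimal NumPy sketch of that idea under simplifying assumptions: a small made-up structure matrix, a linear equality set, and box limits. It is not the authors' implementation, and the toy numbers are arbitrary.

```python
import numpy as np


def project_affine(t, A, w, A_pinv):
    """Orthogonal projection of t onto the affine set {t : A t = w}."""
    return t - A_pinv @ (A @ t - w)


def project_box(t, t_min, t_max):
    """Orthogonal projection onto the box of admissible cable tensions."""
    return np.clip(t, t_min, t_max)


def dykstra_tensions(A, w, t_min, t_max, t0, iters=500, tol=1e-9):
    """Dykstra's alternating projections onto the intersection of the
    equilibrium set {t : A t = w} and the tension box [t_min, t_max].
    A is the structure matrix (wrench rows x cables), w the desired wrench."""
    A_pinv = np.linalg.pinv(A)
    x, p, q = t0.copy(), np.zeros_like(t0), np.zeros_like(t0)
    for _ in range(iters):
        y = project_affine(x + p, A, w, A_pinv)
        p = x + p - y
        x_new = project_box(y + q, t_min, t_max)
        q = y + q - x_new
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    feasible = np.allclose(A @ x, w, atol=1e-6)  # fails if the wrench is infeasible
    return x, feasible


# Toy example: planar point mass driven by 3 cables (illustrative numbers only).
A = np.array([[1.0, -0.5, -0.5],
              [0.0,  0.8, -0.8]])
t, ok = dykstra_tensions(A, w=np.array([1.0, 0.2]),
                         t_min=np.full(3, 0.1), t_max=np.full(3, 10.0),
                         t0=np.full(3, 1.0))
print(t, ok)
```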
{
"docid": "0472c8c606024aaf2700dee3ad020c07",
"text": "Any discussion on exchange rate movements and forecasting should include explanatory variables from both the current account and the capital account of the balance of payments. In this paper, we include such factors to forecast the value of the Indian rupee vis a vis the US Dollar. Further, factors reflecting political instability and lack of mechanism for enforcement of contracts that can affect both direct foreign investment and also portfolio investment, have been incorporated. The explanatory variables chosen are the 3 month Rupee Dollar futures exchange rate (FX4), NIFTY returns (NIFTYR), Dow Jones Industrial Average returns (DJIAR), Hang Seng returns (HSR), DAX returns (DR), crude oil price (COP), CBOE VIX (CV) and India VIX (IV). To forecast the exchange rate, we have used two different classes of frameworks namely, Artificial Neural Network (ANN) based models and Time Series Econometric models. Multilayer Feed Forward Neural Network (MLFFNN) and Nonlinear Autoregressive models with Exogenous Input (NARX) Neural Network are the approaches that we have used as ANN models. Generalized Autoregressive Conditional Heteroskedastic (GARCH) and Exponential Generalized Autoregressive Conditional Heteroskedastic (EGARCH) techniques are the ones that we have used as Time Series Econometric methods. Within our framework, our results indicate that, although the two different approaches are quite efficient in forecasting the exchange rate, MLFNN and NARX are the most efficient. Journal of Insurance and Financial Management ARTICLE INFO JEL Classification: C22 C45 C63 F31 F47",
"title": ""
},
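The passage above compares neural network models (MLFFNN, NARX) with GARCH-type models for exchange-rate forecasting. As a hedged illustration only, the sketch below fits a small multilayer perceptron to lagged values of a synthetic series using scikit-learn; the actual study uses USD/INR data and exogenous inputs (futures prices, index returns, crude oil, VIX), none of which are reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for an exchange-rate series (random walk around 60).
rate = 60 + np.cumsum(rng.normal(0, 0.2, 600))


def lagged_matrix(series, lags=5):
    """Build a supervised-learning matrix of `lags` past values -> next value."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    return X, y


X, y = lagged_matrix(rate)
split = 500
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"out-of-sample RMSE: {rmse:.4f}")
```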
{
"docid": "d6b87f5b6627f1a1ac5cc951c7fe0f28",
"text": "Despite a strong nonlinear behavior and a complex design, the interior permanent-magnet (IPM) machine is proposed as a good candidate among the PM machines owing to its interesting peculiarities, i.e., higher torque in flux-weakening operation, higher fault tolerance, and ability to adopt low-cost PMs. A second trend in designing PM machines concerns the adoption of fractional-slot (FS) nonoverlapped coil windings, which reduce the end winding length and consequently the Joule losses and the cost. Therefore, the adoption of an IPM machine with an FS winding aims to combine both advantages: high torque and efficiency in a wide operating region. However, the combination of an anisotropic rotor and an FS winding stator causes some problems. The interaction between the magnetomotive force harmonics due to the stator current and the rotor anisotropy causes a very high torque ripple. This paper illustrates a procedure in designing an IPM motor with the FS winding exhibiting a low torque ripple. The design strategy is based on two consecutive steps: at first, the winding is optimized by taking a multilayer structure, and then, the rotor geometry is optimized by adopting a nonsymmetric structure. As an example, a 12-slot 10-pole IPM machine is considered, achieving a torque ripple lower than 1.5% at full load.",
"title": ""
},
{
"docid": "66c57a94a5531b36199bd52521a56ccb",
"text": "This project describes design and experimental analysis of composite leaf spring made of glass fiber reinforced polymer. The objective is to compare the load carrying capacity, stiffness and weight savings of composite leaf spring with that of steel leaf spring. The design constraints are stresses and deflections. The dimensions of an existing conventional steel leaf spring of a light commercial vehicle are taken. Same dimensions of conventional leaf spring are used to fabricate a composite multi leaf spring using E-Glass/Epoxy unidirectional laminates. Static analysis of 2-D model of conventional leaf spring is also performed using ANSYS 10 and compared with experimental results. Finite element analysis with full load on 3-D model of composite multi leaf spring is done using ANSYS 10 and the analytical results are compared with experimental results. Compared to steel spring, the composite leaf spring is found to have 67.35% lesser stress, 64.95% higher stiffness and 126.98% higher natural frequency than that of existing steel leaf spring. A weight reduction of 76.4% is achieved by using optimized composite leaf spring.",
"title": ""
},
{
"docid": "b07f858d08f40f61f3ed418674948f12",
"text": "Nowadays, due to the great distance between design and implementation worlds, different skills are necessary to create a game system. To solve this problem, a lot of strategies for game development, trying to increase the abstraction level necessary for the game production, were proposed. In this way, a lot of game engines, game frameworks and others, in most cases without any compatibility or reuse criteria between them, were developed. This paper presents a new generative programming approach, able to increase the production of a digital game by the integration of different game development artifacts, following a system family strategy focused on variable and common aspects of a computer game. As result, high level abstractions of games, based on a common language, can be used to configure met programming transformations during the game production, providing a great compatibility level between game domain and game implementation artifacts.",
"title": ""
},
{
"docid": "d63946a096b9e8a99be6d5ddfe4097da",
"text": "While the first open comparative challenges in the field of paralinguistics targeted more ‘conventional’ phenomena such as emotion, age, and gender, there still exists a multiplicity of not yet covered, but highly relevant speaker states and traits. The INTERSPEECH 2011 Speaker State Challenge thus addresses two new sub-challenges to overcome the usually low compatibility of results: In the Intoxication Sub-Challenge, alcoholisation of speakers has to be determined in two classes; in the Sleepiness Sub-Challenge, another two-class classification task has to be solved. This paper introduces the conditions, the Challenge corpora “Alcohol Language Corpus” and “Sleepy Language Corpus”, and a standard feature set that may be used. Further, baseline results are given.",
"title": ""
},
{
"docid": "0b6ce2e4f3ef7f747f38068adef3da54",
"text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.",
"title": ""
},
{
"docid": "e858a3bda1ac2568afa328cd4352c804",
"text": "Bilingual advantages in executive control tasks are well documented, but it is not yet clear what degree or type of bilingualism leads to these advantages. To investigate this issue, we compared the performance of two bilingual groups and monolingual speakers in task-switching and language-switching paradigms. Spanish-English bilinguals, who reported switching between languages frequently in daily life, exhibited smaller task-switching costs than monolinguals after controlling for between-group differences in speed and parent education level. By contrast, Mandarin-English bilinguals, who reported switching languages less frequently than Spanish-English bilinguals, did not exhibit a task-switching advantage relative to monolinguals. Comparing the two bilingual groups in language-switching, Spanish-English bilinguals exhibited smaller costs than Mandarin-English bilinguals, even after matching for fluency in the non-dominant language. These results demonstrate an explicit link between language-switching and bilingual advantages in task-switching, while also illustrating some limitations on bilingual advantages.",
"title": ""
},
{
"docid": "8109594325601247cdb253dbb76b9592",
"text": "Disturbance compensation is one of the major problems in control system design. Due to external disturbance or model uncertainty that can be treated as disturbance, all control systems are subject to disturbances. When it comes to networked control systems, not only disturbances but also time delay is inevitable where controllers are remotely connected to plants through communication network. Hence, simultaneous compensation for disturbance and time delay is important. Prior work includes a various combinations of smith predictor, internal model control, and disturbance observer tailored to simultaneous compensation of both time delay and disturbance. In particular, simplified internal model control simultaneously compensates for time delay and disturbances. But simplified internal model control is not applicable to the plants that have two poles at the origin. We propose a modified simplified internal model control augmented with disturbance observer which simultaneously compensates time delay and disturbances for the plants with two poles at the origin. Simulation results are provided.",
"title": ""
},
{
"docid": "81126b57a29b4c9aee46ecb04c7f43ca",
"text": "Within the field of bibliometrics, there is sustained interest in how nations “compete” in terms of academic disciplines, and what determinants explain why countries may have a specific advantage in one discipline over another. However, this literature has not, to date, presented a comprehensive structured model that could be used in the interpretation of a country’s research profile and aca‐ demic output. In this paper, we use frameworks from international business and economics to pre‐ sent such a model. Our study makes four major contributions. First, we include a very wide range of countries and disci‐ plines, explicitly including the Social Sciences, which unfortunately are excluded in most bibliometrics studies. Second, we apply theories of revealed comparative advantage and the competitive ad‐ vantage of nations to academic disciplines. Third, we cluster our 34 countries into five different groups that have distinct combinations of revealed comparative advantage in five major disciplines. Finally, based on our empirical work and prior literature, we present an academic diamond that de‐ tails factors likely to explain a country’s research profile and competitiveness in certain disciplines.",
"title": ""
},
{
"docid": "5aee510b62d8792a38044fc8c68a57e4",
"text": "In this paper we present a novel method for jointly extracting beats and downbeats from audio signals. A recurrent neural network operating directly on magnitude spectrograms is used to model the metrical structure of the audio signals at multiple levels and provides an output feature that clearly distinguishes between beats and downbeats. A dynamic Bayesian network is then used to model bars of variable length and align the predicted beat and downbeat positions to the global best solution. We find that the proposed model achieves state-of-the-art performance on a wide range of different musical genres and styles.",
"title": ""
},
{
"docid": "87a11f6097cb853b7c98e17cdf97801e",
"text": "Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures—recurrent versus non-recurrent—with respect to their ability to model hierarchical structure and find that recurrency is indeed important for this purpose. The code and data used in our experiments is available at https://github.com/",
"title": ""
}
] |
scidocsrr
|
bcbe99733d48107626df7954b4ef2526
|
Smart tourism: foundations and developments
|
[
{
"docid": "72221bf6d95f297449fd2c7b646488e9",
"text": "Recent changes in service environments have changed the preconditions of their production and consumption. These changes include unbundling services from production processes, growth of the information-rich economy and society, the search for creativity in service production and consumption and continuing growth of digital technologies. These contextual changes affect city governments because they provide a range of infrastructure and welfare services to citizens. Concepts such as ‘smart city’, ‘intelligent city’ and ‘knowledge city’ build new horizons for cities in undertaking their challenging service functions in an increasingly cost-conscious, competitive and environmentally oriented setting. What is essential in practically all of them is that they paint a picture of cities with smooth information processes, facilitation of creativity and innovativeness, and smart and sustainable solutions promoted through service platforms. This article discusses this topic, starting from the nature of services and the new service economy as the context of smart local public services. On this basis, we build an overall framework for understanding the basic forms and dimensions of smart public services. The focus is on conceptual systematisation of the key dimensions of smart services and the conceptual modelling of smart service platforms through which digital technology is increasingly embedded in social creativity. We provide examples of real-life smart service applications within the European context.",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "d81e35229c0fc0b9c7d498a254a4d6be",
"text": "Recent advances in the field of technology have led to the emergence of innovative technological smart solutions providing unprecedented opportunities for application in the tourism and hospitality industry. With intensified competition in the tourism market place, it has become paramount for businesses to explore the potential of technologies, not only to optimize existing processes but facilitate the creation of more meaningful and personalized services and experiences. This study aims to bridge the current knowledge gap between smart technologies and experience personalization to understand how smart mobile technologies can facilitate personalized experiences in the context of the hospitality industry. By adopting a qualitative case study approach, this paper makes a two-fold contribution; it a) identifies the requirements of smart technologies for experience creation, including information aggregation, ubiquitous mobile connectedness and real time synchronization and b) highlights how smart technology integration can lead to two distinct levels of personalized tourism experiences. The paper concludes with the development of a model depicting the dynamic process of experience personalization and a discussion of the strategic implications for tourism and hospitality management and research.",
"title": ""
}
] |
[
{
"docid": "e33080761e4ece057f455148c7329d5e",
"text": "This paper compares the utilization of ConceptNet and WordNet in query expansion. Spreading activation selects candidate terms for query expansion from these two resources. Three measures including discrimination ability, concept diversity, and retrieval performance are used for comparisons. The topics and document collections in the ad hoc track of TREC-6, TREC-7 and TREC-8 are adopted in the experiments. The results show that ConceptNet and WordNet are complementary. Queries expanded with WordNet have higher discrimination ability. In contrast, queries expanded with ConceptNet have higher concept diversity. The performance of queries expanded by selecting the candidate terms from ConceptNet and WordNet outperforms that of queries without expansion, and queries expanded with a single resource.",
"title": ""
},
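The abstract above compares ConceptNet and WordNet as sources of query-expansion terms selected by spreading activation. The snippet below is a much simpler illustration of the WordNet side only, using NLTK to add synonym and hypernym lemmas for each query term; it does not implement spreading activation or the ConceptNet part, and the term-selection heuristic is an assumption of this sketch.

```python
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn


def expand_query(terms, max_per_term=3):
    """Naive WordNet-based expansion: add a few synonym and hypernym lemmas
    for each query term (far simpler than spreading activation)."""
    expanded = list(terms)
    for term in terms:
        candidates = []
        for syn in wn.synsets(term):
            candidates.extend(l.replace("_", " ") for l in syn.lemma_names())
            for hyper in syn.hypernyms():
                candidates.extend(l.replace("_", " ") for l in hyper.lemma_names())
        # Keep a few novel candidates, preserving order of first appearance.
        novel = [c for c in dict.fromkeys(candidates) if c.lower() != term.lower()]
        expanded.extend(novel[:max_per_term])
    return expanded


print(expand_query(["vehicle", "pollution"]))
```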
{
"docid": "00ff2d5e2ca1d913cbed769fe59793d4",
"text": "In recent work, we showed that putatively adaptive emotion regulation strategies, such as reappraisal and acceptance, have a weaker association with psychopathology than putatively maladaptive strategies, such as rumination, suppression, and avoidance (e.g., Aldao & Nolen-Hoeksema, 2010; Aldao, Nolen-Hoeksema, & Schweizer, 2010). In this investigation, we examined the interaction between adaptive and maladaptive emotion regulation strategies in the prediction of psychopathology symptoms (depression, anxiety, and alcohol problems) concurrently and prospectively. We assessed trait emotion regulation and psychopathology symptoms in a sample of community residents at Time 1 (N = 1,317) and then reassessed psychopathology at Time 2 (N = 1,132). Cross-sectionally, we found that the relationship between adaptive strategies and psychopathology symptoms was moderated by levels of maladaptive strategies: adaptive strategies had a negative association with psychopathology symptoms only at high levels of maladaptive strategies. In contrast, adaptive strategies showed no prospective relationship to psychopathology symptoms either alone or in interaction with maladaptive strategies. We discuss the implications of this investigation for future work on the contextual factors surrounding the deployment of emotion regulation strategies.",
"title": ""
},
{
"docid": "27e10b0ba009a8b86431a808e712d761",
"text": "In this work, we propose using camera arrays coupled with coherent illumination as an effective method of improving spatial resolution in long distance images by a factor often and beyond. Recent advances in ptychography have demonstrated that one can image beyond the diffraction limit of the objective lens in a microscope. We demonstrate a similar imaging system to image beyond the diffraction limit in long range imaging. We emulate a camera array with a single camera attached to an XY translation stage. We show that an appropriate phase retrieval based reconstruction algorithm can be used to effectively recover the lost high resolution details from the multiple low resolution acquired images. We analyze the effects of noise, required degree of image overlap, and the effect of increasing synthetic aperture size on the reconstructed image quality. We show that coherent camera arrays have the potential to greatly improve imaging performance. Our simulations show resolution gains of 10× and more are achievable. Furthermore, experimental results from our proof-of-concept systems show resolution gains of 4 × -7× for real scenes. All experimental data and code is made publicly available on the project webpage. Finally, we introduce and analyze in simulation a new strategy to capture macroscopic Fourier Ptychography images in a single snapshot, albeit using a camera array.",
"title": ""
},
{
"docid": "6de8ae942642948928028da20dd548d5",
"text": "This paper describes the design, construction, and operation of a closed-loop spherical induction motor (SIM) ball wheel for a balancing mobile robot (ballbot). Following earlier work, this new design has a smaller rotor and higher torques due to the use of six stators in a skewed layout. Actuation and sensing kinematics as well as control methods are presented. In its current implementation, torques of up to 8 Nm are produced by the motor with rise and decay times of 100 ms. Results are presented supporting its potential as a prime mover for mobile robots.",
"title": ""
},
{
"docid": "001764b6037862def1e37fec85984293",
"text": "We present a basic technique to fill-in missing parts of a video sequence taken from a static camera. Two important cases are considered. The first case is concerned with the removal of non-stationary objects that occlude stationary background. We use a priority based spatio-temporal synthesis scheme for inpainting the stationary background. The second and more difficult case involves filling-in moving objects when they are partially occluded. For this, we propose a priority scheme to first inpaint the occluded moving objects and then fill-in the remaining area with stationary background using the method proposed for the first case. We use as input an optical-flow based mask, which tells if an undamaged pixel is moving or is stationary. The moving object is inpainted by copying patches from undamaged frames, and this copying is independent of the background of the moving object in either frame. This work has applications in a variety of different areas, including video special effects and restoration and enhancement of damaged videos. The examples shown in the paper illustrate these ideas.",
"title": ""
},
{
"docid": "7355bf66dac6e027c1d6b4c2631d8780",
"text": "Cannabidiol is a component of marijuana that does not activate cannabinoid receptors, but moderately inhibits the degradation of the endocannabinoid anandamide. We previously reported that an elevation of anandamide levels in cerebrospinal fluid inversely correlated to psychotic symptoms. Furthermore, enhanced anandamide signaling let to a lower transition rate from initial prodromal states into frank psychosis as well as postponed transition. In our translational approach, we performed a double-blind, randomized clinical trial of cannabidiol vs amisulpride, a potent antipsychotic, in acute schizophrenia to evaluate the clinical relevance of our initial findings. Either treatment was safe and led to significant clinical improvement, but cannabidiol displayed a markedly superior side-effect profile. Moreover, cannabidiol treatment was accompanied by a significant increase in serum anandamide levels, which was significantly associated with clinical improvement. The results suggest that inhibition of anandamide deactivation may contribute to the antipsychotic effects of cannabidiol potentially representing a completely new mechanism in the treatment of schizophrenia.",
"title": ""
},
{
"docid": "ea8c0a7516b180a6a542a852b62e6497",
"text": "Genetic growth curves of boars in a test station were predicted on daily weight records collected by automated weighing scales. The data contained 121 865 observations from 1477 Norwegian Landrace boars and 108 589 observations from 1300 Norwegian Duroc boars. Random regression models using Legendre polynomials up to second order for weight at different ages were compared for best predicting ability and Bayesian information criterion (BIC) for both breeds. The model with second-order polynomials had best predictive ability and BIC. The heritability for weight, based on this model, was found to vary along the growth trajectory between 0.32-0.35 for Duroc and 0.17-0.25 for Landrace. By varying test length possibility to use shorter test time and pre-selection was tested. Test length was varied and compared with average termination at 100 kg, termination of the test at 90 kg gives, e.g. 2% reduction in accuracy of estimated breeding values (EBV) for both breeds and termination at 80 kg gives 5% reduction in accuracy of EBVs for Landrace and 3% for Duroc. A shorter test period can decrease test costs per boar, but also gives possibilities to increase selection intensity as there will be room for testing more boars.",
"title": ""
},
{
"docid": "44d8cb42bd4c2184dc226cac3adfa901",
"text": "Several descriptions of redundancy are presented in the literature , often from widely dif ferent perspectives . Therefore , a discussion of these various definitions and the salient points would be appropriate . In particular , any definition and redundancy needs to cover the following issues ; the dif ference between multiple solutions and an infinite number of solutions ; degenerate solutions to inverse kinematics ; task redundancy ; and the distinction between non-redundant , redundant and highly redundant manipulators .",
"title": ""
},
{
"docid": "917458b0c9e26b878676d1edf542b5ea",
"text": "The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group, and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the important results from these studies and also presents new ideas and alternative interpretations which further explain the success of the EnKF. In addition to providing the theoretical framework needed for using the EnKF, there is also a focus on the algorithmic formulation and optimal numerical implementation. A program listing is given for some of the key subroutines. The paper also touches upon specific issues such as the use of nonlinear measurements, in situ profiles of temperature and salinity, and data which are available with high frequency in time. An ensemble based optimal interpolation (EnOI) scheme is presented as a cost-effective approach which may serve as an alternative to the EnKF in some applications. A fairly extensive discussion is devoted to the use of time correlated model errors and the estimation of model bias.",
"title": ""
},
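The review above concerns the Ensemble Kalman Filter. For readers who want a concrete anchor, the following is a standard textbook-style analysis step with perturbed observations in NumPy; it is a generic sketch, not the paper's optimized implementation, and the small state and observation sizes are purely illustrative.

```python
import numpy as np


def enkf_analysis(X, y, H, R, rng):
    """One EnKF analysis step with perturbed observations.
    X : (n, N) forecast ensemble (n state variables, N members)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance"""
    n, N = X.shape
    # Ensemble mean and anomalies
    x_mean = X.mean(axis=1, keepdims=True)
    A = X - x_mean
    # Sample covariances in observation space
    HA = H @ A
    P_yy = HA @ HA.T / (N - 1) + R
    P_xy = A @ HA.T / (N - 1)
    K = P_xy @ np.linalg.inv(P_yy)  # Kalman gain
    # Perturbed observations, one draw per ensemble member
    D = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (D - H @ X)


# Tiny example: 3-variable state, observing the first two components.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(3, 50))
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
R = 0.1 * np.eye(2)
Xa = enkf_analysis(X, y=np.array([0.5, -0.3]), H=H, R=R, rng=rng)
print(Xa.mean(axis=1))
```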
{
"docid": "9c799b4d771c724969be7b392697ebee",
"text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.",
"title": ""
},
{
"docid": "712335f6cbe0d00fce07d6bb6d600759",
"text": "Narrowband Internet of Things (NB-IoT) is a new radio access technology, recently standardized in 3GPP to enable support for IoT devices. NB-IoT offers a range of flexible deployment options and provides improved coverage and support for a massive number of devices within a cell. In this paper, we provide a detailed evaluation of the coverage performance of NBIoT and show that it achieves a coverage enhancement of up to 20 dB when compared with existing LTE technology.",
"title": ""
},
{
"docid": "06b1a00a97eea61ada0d92469254ddbd",
"text": "We propose a model for clustering data with spatiotemporal intervals. This model is used to effectively evaluate clusters of spatiotemporal interval data. A new energy function is used to measure similarity and balance between clusters in spatial and temporal dimensions. We employ as a case study a large collection of parking data from a real CBD area. The proposed model is applied to existing traditional algorithms to address spatiotemporal interval data clustering problem. Results from traditional clustering algorithms are compared and analysed using the proposed energy function.",
"title": ""
},
{
"docid": "802f77b4e2b8c8cdfb68f80fe31d7494",
"text": "In this article, we use three clustering methods (K-means, self-organizing map, and fuzzy K-means) to find properly graded stock market brokerage commission rates based on the 3-month long total trades of two different transaction modes (representative assisted and online trading system). Stock traders for both modes are classified in terms of the amount of the total trade as well as the amount of trade of each transaction mode, respectively. Results of our empirical analysis indicate that fuzzy K-means cluster analysis is the most robust approach for segmentation of customers of both transaction modes. We then propose a decision tree based rule to classify three groups of customers and suggest different brokerage commission rates of 0.4, 0.45, and 0.5% for representative assisted mode and 0.06, 0.1, and 0.18% for online trading system, respectively. q 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
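The abstract above clusters brokerage customers by 3-month trade amounts and maps the segments to commission tiers. A rough scikit-learn sketch of the K-means part is shown below; the lognormal trade data are synthetic stand-ins, and the paper's self-organizing map and fuzzy K-means comparisons are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for 3-month trade amounts per customer, split by
# transaction mode (representative-assisted vs. online trading system).
assisted = rng.lognormal(mean=10, sigma=1.0, size=500)
online = rng.lognormal(mean=11, sigma=1.2, size=500)
total = assisted + online
X = StandardScaler().fit_transform(np.column_stack([assisted, online, total]))

# Three segments, mirroring the three commission tiers discussed in the paper.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for c in range(3):
    members = km.labels_ == c
    print(f"segment {c}: {members.sum()} customers, "
          f"mean 3-month trade {total[members].mean():,.0f}")
```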
{
"docid": "a57bdfa9c48a76d704258f96874ea700",
"text": "BACKGROUND\nPrevious state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text \"feature engineering\" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently heavily time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word \"embeddings\".\n\n\nOBJECTIVES\n(i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets.\n\n\nMETHODS\nTwo deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models.\n\n\nRESULTS\nWe have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset.\n\n\nCONCLUSIONS\nWe present a state-of-the-art system for DNR and CCE. Automated word embeddings has allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain, in order to adequately cover the domain-specific vocabulary.",
"title": ""
},
{
"docid": "5c512bf8cb37f3937b27855e03e111d6",
"text": "Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in statistical learning of latent variable models and in data mining. In this paper, we propose fast and randomized tensor CP decomposition algorithms based on sketching. We build on the idea of count sketches, but introduce many novel ideas which are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors. Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares. We also design novel colliding hashes for symmetric tensors to further save time in computing the sketches. We then combine these sketching ideas with existing whitening and tensor power iterative techniques to obtain the fastest algorithm on both sparse and dense tensors. The quality of approximation under our method does not depend on properties such as sparsity, uniformity of elements, etc. We apply the method for topic modeling and obtain competitive results.",
"title": ""
},
{
"docid": "3dd1755e44ecefbc1cc12ad172cec9dd",
"text": "s from the hardware actually present. This abstraction happens between the hardware and the software layer of a system, as indicated in Fig. 2.8; which shows two virtual machines mapped to the same hardware and encapsulated by individual containers. Note that virtual machines (VM) can run distinct operating systems atop the same hardware. Virtualization typically simplifies the administration of a system, and it can help increase system security; a crash of a virtual machine has no impact on other virtual machines. Technically a virtual machine is nothing but a file. Virtualization is implemented using a Virtual Machine Monitor or Hypervisor which takes care of resource mapping and management. We finally mention another precursor to cloud computing, which can be observed during the past 25 years as a major paradigm shift in software development, namely a departure from large and monolithic software applications to light-weight services which ultimately can be composed and orchestrated into more powerful services that finally carry entire application scenarios. Service-orientation especially in the form of service calls to an open application programming interface (API) that can be contacted over the Web as long as the correct input parameters are delivered have not only become very popular, but are also exploited these days in numerous ways, for the particular reason of giving users an increased level of functionality from a single source. A benefit of the service approach to software development has so far been the fact that platform development especially on the Web has received a high amount of attention in recent years. Yet it has also contributed to the fact that services which a provider delivers behind the scenes to some well-defined interface can be enhanced and modified and even permanently corrected and updated without the user even noticing, and it has triggered the development of the SOA (Service-Oriented Architecture) concept that was mentioned in the previous section. Operating System App. 1 App. 2 Virtualization Layer Operating System App. 3 App. 4 Hardware VM Container VM Container Fig. 2.8 Virtualized infrastructure 2.2 Virtualization and Cloud Computing 73",
"title": ""
},
{
"docid": "e4817273d4601c309a0a5577fafb651f",
"text": "This study investigated performance and physiology to understand pacing strategies in elite Paralympic athletes with cerebral palsy (CP). Six Paralympic athletes with CP and 13 able-bodied (AB) athletes performed two trials of eight sets of 10 shuttles (total 1600m). One trial was distance-deceived (DEC, 1000 m + 600 m) one trial was nondeceived (N-DEC, 1600 m). Time (s), heart rate (HR, bpm), ratings of perceived exertion (RPE, units), and electromyography of five bilateral muscles (EMG) were recorded for each set of both trials. The CP group ran slower than the AB group, and pacing differences were seen in the CP DEC trial, presenting as a flat pacing profile over the trial (P < 0.05). HR was higher and RPE was lower in the CP group in both trials (P < 0.05). EMG showed small differences between groups, sides, and trials. The present study provides evidence for a possible pacing strategy underlying exercise performance and fatigue in CP. The results of this study show (1) underperformance of the CP group, and (2) altered pacing strategy utilization in the CP group. We proposed that even at high levels of performance, the residual effects of CP may negatively affect performance through selection of conservative pacing strategies during exercise.",
"title": ""
},
{
"docid": "ad8ebb2f4ec3350a2486a63019557633",
"text": "Building a persona-based conversation agent is challenging owing to the lack of large amounts of speaker-specific conversation data for model training. This paper addresses the problem by proposing a multi-task learning approach to training neural conversation models that leverages both conversation data across speakers and other types of data pertaining to the speaker and speaker roles to be modeled. Experiments show that our approach leads to significant improvements over baseline model quality, generating responses that capture more precisely speakers’ traits and speaking styles. The model offers the benefits of being algorithmically simple and easy to implement, and not relying on large quantities of data representing specific individual speakers.",
"title": ""
},
{
"docid": "b7e78ca489cdfb8efad03961247e12f2",
"text": "ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. KeywordsSpeech Recognition; Error Correction; Bing Spelling",
"title": ""
},
{
"docid": "457efc3b22084fd7221637bd574ff075",
"text": "Group-based trajectory models are used to investigate population differences in the developmental courses of behaviors or outcomes . This article demonstrates a new Stata command, traj, for fitting to longitudinal data finite (discrete) mixture models designed to identify clusters of individuals following similar progressions of some behavior or outcome over age or time. Censored normal, Poisson, zero-inflated Poisson, and Bernoulli distributions are supported. Applications to psychometric scale data, count data, and a dichotomous prevalence measure are illustrated. Introduction A developmental trajectory measures the course of an outcome over age or time. The study of developmental trajectories is a central theme of developmental and abnormal psychology and psychiatry, of life course studies in sociology and criminology, of physical and biological outcomes in medicine and gerontology. A wide variety of statistical methods are used to study these phenomena. This article demonstrates a Stata plugin for estimating group-based trajectory models. The Stata program we demonstrate adapts a well-established SAS-based procedure for estimating group-based trajectory model (Jones, Nagin, and Roeder, 2001; Jones and Nagin, 2007) to the Stata platform. Group-based trajectory modeling is a specialized form of finite mixture modeling. The method is designed identify groups of individuals following similarly developmental trajectories. For a recent review of applications of group-based trajectory modeling see Nagin and Odgers (2010) and for an extended discussion of the method, including technical details, see Nagin (2005). A Brief Overview of Group-Based Trajectory Modeling Using finite mixtures of suitably defined probability distributions, the group-based approach for modeling developmental trajectories is intended to provide a flexible and easily applied method for identifying distinctive clusters of individual trajectories within the population and for profiling the characteristics of individuals within the clusters. Thus, whereas the hierarchical and latent curve methodologies model population variability in growth with multivariate continuous distribution functions, the group-based approach utilizes a multinomial modeling strategy. Technically, the group-based trajectory model is an example of a finite mixture model. Maximum likelihood is used for the estimation of the model parameters. The maximization is performed using a general quasi-Newton procedure (Dennis, Gay, and Welsch 1981; Dennis and Mei 1979). The fundamental concept of interest is the distribution of outcomes conditional on age (or time); that is, the distribution of outcome trajectories denoted by ), | ( i i Age Y P where the random vector Yi represents individual i’s longitudinal sequence of behavioral outcomes and the vector Agei represents individual i’s age when each of those measurements is recorded. The group-based trajectory model assumes that the population distribution of trajectories arises from a finite mixture of unknown order J. The likelihood for each individual i, conditional on the number of groups J, may be written as 1 Trajectories can also be defined by time (e.g., time from treatment). 1 ( | ) ( | , ; ) (1), J j j i i i i j P Y Age P Y Age j where is the probability of membership in group j, and the conditional distribution of Yi given membership in j is indexed by the unknown parameter vector which among other things determines the shape of the group-specific trajectory. 
The trajectory is modeled with up to a 5 order polynomial function of age (or time). For given j, conditional independence is assumed for the sequential realizations of the elements of Yi , yit, over the T periods of measurement. Thus, we may write T i t j it it j i i j age y p j Age Y P ), 2 ( ) ; , | ( ) ; , | ( where p(.) is the distribution of yit conditional on membership in group j and the age of individual i at time t. 2 The software provides three alternative specifications of p(.): the censored normal distribution also known as the Tobit model, the zero-inflated Poisson distribution, and the binary logit distribution. The censored normal distribution is designed for the analysis of repeatedly measured, (approximately) continuous scales which may be censored by either a scale minimum or maximum or both (e.g., longitudinal data on a scale of depression symptoms). A special case is a scale or other outcome variable with no minimum or maximum. The zero-inflated Poisson distribution is designed for the analysis of longitudinal count data (e.g., arrests by age) and binary logit distribution for the analysis of longitudinal data on a dichotomous outcome variable (e.g., whether hospitalized in year t or not). The model also provides capacity for analyzing the effect of time-stable covariate effects on probability of group membership and the effect of time dependent covariates on the trajectory itself. Let i x denote a vector of time stable covariates thought to be associated with probability of trajectory group membership. Effects of time-stable covariates are modeled with a generalized logit function where without loss of generality :",
"title": ""
}
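The passage above defines the group-based trajectory likelihood in Eqs. (1)-(2): a mixture over J groups, each with a polynomial trajectory and conditionally independent outcomes over the measurement periods. The sketch below computes posterior group memberships for that mixture under a plain normal outcome, which is a simplification of the censored-normal case handled by traj; the groups, coefficients and data are invented for illustration.

```python
import numpy as np
from scipy.stats import norm


def group_posteriors(Y, ages, betas, sigmas, pis):
    """Posterior group-membership probabilities for a group-based trajectory
    model with a (non-censored) normal outcome.
    Y      : (n, T) outcomes; ages: (n, T) measurement ages
    betas  : (J, p+1) polynomial coefficients per group (order p)
    sigmas : (J,) residual s.d. per group; pis: (J,) group probabilities."""
    n, T = Y.shape
    J = betas.shape[0]
    log_post = np.zeros((n, J))
    for j in range(J):
        mu = np.polynomial.polynomial.polyval(ages, betas[j])  # (n, T) trajectory means
        # Eq. (2): conditional independence over the T measurements within a group
        log_post[:, j] = np.log(pis[j]) + norm.logpdf(Y, mu, sigmas[j]).sum(axis=1)
    # Eq. (1): normalise the mixture over groups (in a numerically stable way)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)


# Two illustrative groups: a flat trajectory and a rising one.
rng = np.random.default_rng(0)
ages = np.tile(np.arange(10, 18, dtype=float), (6, 1))
betas = np.array([[2.0, 0.0], [0.0, 0.3]])  # intercept and slope per group
Y = np.vstack([2.0 + rng.normal(0, 0.3, (3, 8)),
               0.3 * ages[:3] + rng.normal(0, 0.3, (3, 8))])
print(group_posteriors(Y, ages, betas, sigmas=np.array([0.3, 0.3]),
                       pis=np.array([0.5, 0.5])).round(2))
```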
] |
scidocsrr
|
caf6ec9906c41d5661636bb12a137d38
|
Combining Self Training and Active Learning for Video Segmentation
|
[
{
"docid": "d4aaea0107cbebd7896f4cb57fa39c05",
"text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs",
"title": ""
}
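The positive passage above is the classic random-walker formulation of interactive, multilabel segmentation. A minimal runnable example using the scikit-image implementation of that algorithm is sketched below on a synthetic image with two seed pixels; the beta value and the image itself are arbitrary choices, not taken from the paper.

```python
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(0)

# Synthetic two-region image: a bright disc on a dark background, plus noise.
yy, xx = np.mgrid[:128, :128]
image = (((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2).astype(float)
image += rng.normal(0, 0.25, image.shape)

# Sparse user seeds (the "small number of prelabeled pixels"):
# label 1 inside the disc, label 2 in the background, 0 means unlabeled.
seeds = np.zeros(image.shape, dtype=int)
seeds[64, 64] = 1
seeds[5, 5] = 2

# Each unlabeled pixel is assigned the label its random walker is most likely
# to reach first; beta controls how strongly intensity edges block the walk.
labels = random_walker(image, seeds, beta=130, mode='bf')
print((labels == 1).sum(), "pixels assigned to the disc")
```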
] |
[
{
"docid": "4b75c7158f6c20542385d08eca9bddb3",
"text": "PURPOSE\nExtraarticular manifestations of the joint hypermobility syndrome may include the peripheral nervous system. The purpose of this study was to investigate autonomic function in patients with this syndrome.\n\n\nMETHODS\nForty-eight patients with the joint hypermobility syndrome who fulfilled the 1998 Brighton criteria and 30 healthy control subjects answered a clinical questionnaire designed to evaluate the frequency of complaints related to the autonomic nervous system. Next, 27 patients and 21 controls underwent autonomic evaluation: orthostatic testing, cardiovascular vagal and sympathetic functions, catecholamine levels, and adrenoreceptor responsiveness.\n\n\nRESULTS\nSymptoms related to the autonomic nervous system, such as syncope and presyncope, palpitations, chest discomfort, fatigue, and heat intolerance, were significantly more common among patients. Orthostatic hypotension, postural orthostatic tachycardia syndrome, and uncategorized orthostatic intolerance were found in 78% (21/27) of patients compared with in 10% (2/21) of controls. Patients with the syndrome had a greater mean (+/- SD) drop in systolic blood pressure during hyperventilation than did controls (-11 +/- 7 mm Hg vs. -5 +/- 5 mm Hg, P = 0.02) and a greater increase in systolic blood pressure after a cold pressor test (19 +/- 10 mm Hg vs. 11 +/- 13 mm Hg, P = 0.06). Patients with the syndrome also had evidence of alpha-adrenergic (as assessed by administration of phenylephrine) and beta-adrenergic hyperresponsiveness (as assessed by administration of isoproterenol).\n\n\nCONCLUSION\nThe autonomic nervous system-related symptoms of the patients have a pathophysiological basis, which suggests that dysautonomia is an extraarticular manifestation in the joint hypermobility syndrome.",
"title": ""
},
{
"docid": "2a18cc344c6874f02d3aacdf9ba7effb",
"text": "This paper describes the best performing system for the shared task on Named Entity Recognition (NER) on code-switched data for the language pair Spanish-English (ENG-SPA). We introduce a gated neural architecture for the NER task. Our final model achieves an F1 score of 63.76%, outperforming the baseline by 10%.",
"title": ""
},
{
"docid": "1f7fa34fd7e0f4fd7ff9e8bba2a78e3c",
"text": "Today many multi-national companies or organizations are adopting the use of automation. Automation means replacing the human by intelligent robots or machines which are capable to work as human (may be better than human). Artificial intelligence is a way of making machines, robots or software to think like human. As the concept of artificial intelligence is use in robotics, it is necessary to understand the basic functions which are required for robots to think and work like human. These functions are planning, acting, monitoring, perceiving and goal reasoning. These functions help robots to develop its skills and implement it. Since robotics is a rapidly growing field from last decade, it is important to learn and improve the basic functionality of robots and make it more useful and user-friendly.",
"title": ""
},
{
"docid": "2aaa1abe6a56ea9a925c918e30afd995",
"text": "DevOps is a new tendency in business and information technology alignment. The purpose of DevOps is bridging the gap between the development and operations. Several sources claim that DevOps is a new style of work. Many successful DevOps introduction attempts and also many problems in adoption of this style of work have been discussed. This paper reports on research results in facilitating the adoption of DevOps in small enterprises. The DevOps adoption method and several related to it artefacts are proposed. The proposed method has been tested in a national branch of an international company with an internal IT development team.",
"title": ""
},
{
"docid": "7f5ff39232cd491e648d40b070e0709c",
"text": "Synthesizing terrain or adding detail to terrains manually is a long and tedious process. With procedural synthesis methods this process is faster but more difficult to control. This paper presents a new technique of terrain synthesis that uses an existing terrain to synthesize new terrain. To do this we use multi-resolution analysis to extract the high-resolution details from existing models and apply them to increase the resolution of terrain. Our synthesized terrains are more heterogeneous than procedural results, are superior to terrains created by texture transfer, and retain the large-scale characteristics of the original terrain.",
"title": ""
},
{
"docid": "456b7ad01115d9bc04ca378f1eb6d7f2",
"text": "Article history: Received 13 October 2007 Received in revised form 12 June 2008 Accepted 31 July 2008",
"title": ""
},
{
"docid": "486dae23f5a7b19cf8c20fab60de6b0f",
"text": "Histopathological alterations induced by paraquat in the digestive gland of the freshwater snail Lymnaea luteola were investigated. Samples were collected from the Kondakarla lake (Visakhapatnam, Andhra Pradesh, India), where agricultural activities are widespread. Acute toxicity of series of concentration of paraquat to Lymnaea luteola was determined by recording snail mortality of 24, 48, 72 and 96 hrs exposures. The Lc50 value based on probit analysis was found to be 0.073 ml/L for 96 hrs of exposure to the herbicide. Results obtained shown that there were no mortality of snail either in control and those exposed to 0.0196 ml/L paraquat throughout the 96 hrs 100% mortality was recorded with 48hrs on exposed to 0.790 ppm concentration of stock solution of paraquat. At various concentrations paraquat causes significant dose dependent histopathological changes in the digestive gland of L.luteola. The histopathological examinations revealed the following changes: amebocytes infiltrations, the lumen of digestive gland tubule was shrunken; degeneration of cells, secretory cells became irregular, necrosis of cells and atrophy in the connective tissue of digestive gland.",
"title": ""
},
{
"docid": "437be2c4c7a28aa1b84e32632c2eb253",
"text": "Machine reading comprehension is a task to model relationship between passage and query. In terms of deep learning framework, most of stateof-the-art models simply concatenate word and character level representations, which has been shown suboptimal for the concerned task. In this paper, we empirically explore different integration strategies of word and character embeddings and propose a character-augmented reader which attends character-level representation to augment word embedding with a short list to improve word representations, especially for rare words. Experimental results show that the proposed approach helps the baseline model significantly outperform state-of-the-art baselines on various public benchmarks.",
"title": ""
},
{
"docid": "0b4871da012b1c4370833c115fc26c5d",
"text": "Video recognition usually requires a large amount of training samples, which are expensive to be collected. An alternative and cheap solution is to draw from the large-scale images and videos from the Web. With modern search engines, the top ranked images or videos are usually highly correlated to the query, implying the potential to harvest the labeling-free Web images and videos for video recognition. However, there are two key difficulties that prevent us from using the Web data directly. First, they are typically noisy and may be from a completely different domain from that of users’ interest (e.g. cartoons). Second, Web videos are usually untrimmed and very lengthy, where some query-relevant frames are often hidden in between the irrelevant ones. A question thus naturally arises: to what extent can such noisy Web images and videos be utilized for labeling-free video recognition? In this paper, we propose a novel approach to mutually voting for relevant Web images and video frames, where two forces are balanced, i.e. aggressive matching and passive video frame selection. We validate our approach on three large-scale video recognition datasets.",
"title": ""
},
{
"docid": "faf770aba28d13e07573b5bf65db1863",
"text": "In the emerging electronic environment, knowing how to create customercentered Web sites is of great importance. This paper reports two studies on user perceptions of Web sites. First, Kano’s model of quality was used in an exploratory investigation of customer quality expectations for a specific type of site (CNN.com). The quality model was then extended by treating broader site types/domains. The results showed that (1) customers’ quality expectations change over time, and thus no single quality checklist will be good for very long, (2) the Kano model can be used as a framework or method for identifying quality expectations and the time transition of quality factors, (3) customers in a Web domain do not regard all quality factors as equally important, and (4) the rankings of important quality factors dif fer from one Web domain to another, but certain factors were regarded as highly impor tant across all the domains studied.",
"title": ""
},
{
"docid": "f177b129e4a02fe42084563a469dc47d",
"text": "This paper proposes three design concepts for developing a crawling robot inspired by an inchworm, called the Omegabot. First, for locomotion, the robot strides by bending its body into an omega shape; anisotropic friction pads enable the robot to move forward using this simple motion. Second, the robot body is made of a single part but has two four-bar mechanisms and one spherical six-bar mechanism; the mechanisms are 2-D patterned into a single piece of composite and folded to become a robot body that weighs less than 1 g and that can crawl and steer. This design does not require the assembly of various mechanisms of the body structure, thereby simplifying the fabrication process. Third, a new concept for using a shape-memory alloy (SMA) coil-spring actuator is proposed; the coil spring is designed to have a large spring index and to work over a large pitch-angle range. This large-index-and-pitch SMA spring actuator cools faster and requires less energy, without compromising the amount of force and displacement that it can produce. Therefore, the frequency and the efficiency of the actuator are improved. A prototype was used to demonstrate that the inchworm-inspired, novel, small-scale, lightweight robot manufactured on a single piece of composite can crawl and steer.",
"title": ""
},
{
"docid": "1f77cd37ca852253b247e898260fd77e",
"text": "Unpaired image-to-image translation is the problem of mapping an image in the source domain to one in the target domain, without requiring corresponding image pairs. To ensure the translated images are realistically plausible, recent works, such as Cycle-GAN, demands this mapping to be invertible. While, this requirement demonstrates promising results when the domains are unimodal, its performance is unpredictable in a multi-modal scenario such as in an image segmentation task. This is because, invertibility does not necessarily enforce semantic correctness. To this end, we present a semantically-consistent GAN framework, dubbed Sem-GAN, in which the semantics are defined by the class identities of image segments in the source domain as produced by a semantic segmentation algorithm. Our proposed framework includes consistency constraints on the translation task that, together with the GAN loss and the cycle-constraints, enforces that the images when translated will inherit the appearances of the target domain, while (approximately) maintaining their identities from the source domain. We present experiments on several image-to-image translation tasks and demonstrate that Sem-GAN improves the quality of the translated images significantly, sometimes by more than 20% on the FCN score. Further, we show that semantic segmentation models, trained with synthetic images translated via Sem-GAN, leads to significantly better segmentation results than other variants.",
"title": ""
},
{
"docid": "dc9a92313c58b5e688a3502b994e6d3a",
"text": "This paper explores the application of Activity-Based Costing and Activity-Based Management in ecommerce. The proposed application may lead to better firm performance of many companies in offering their products and services over the Internet. A case study of a fictitious Business-to-Customer (B2C) company is used to illustrate the proposed structured implementation procedure and effects of an Activity-Based Costing analysis. The analysis is performed by using matrixes in order to trace overhead. The Activity-Based Costing analysis is then used to demonstrate operational and strategic Activity-Based Management in e-commerce.",
"title": ""
},
{
"docid": "774bf4b0a2c8fe48607e020da2737041",
"text": "A class of three-dimensional planar arrays in substrate integrated waveguide (SIW) technology is proposed, designed and demonstrated with 8 × 16 elements at 35 GHz for millimeter-wave imaging radar system applications. Endfire element is generally chosen to ensure initial high gain and broadband characteristics for the array. Fermi-TSA (tapered slot antenna) structure is used as element to reduce the beamwidth. Corrugation is introduced to reduce the resulting antenna physical width without degradation of performance. The achieved measured gain in our demonstration is about 18.4 dBi. A taper shaped air gap in the center is created to reduce the coupling between two adjacent elements. An SIW H-to-E-plane vertical interconnect is proposed in this three-dimensional architecture and optimized to connect eight 1 × 16 planar array sheets to the 1 × 8 final network. The overall architecture is exclusively fabricated by the conventional PCB process. Thus, the developed SIW feeder leads to a significant reduction in both weight and cost, compared to the metallic waveguide-based counterpart. A complete antenna structure is designed and fabricated. The planar array ensures a gain of 27 dBi with low SLL of 26 dB and beamwidth as narrow as 5.15 degrees in the E-plane and 6.20 degrees in the 45°-plane.",
"title": ""
},
{
"docid": "e159ffe1f686e400b28d398127edfc5c",
"text": "In this paper, we present an in-vehicle computing system capable of localizing lane markings and communicating them to drivers. To the best of our knowledge, this is the first system that combines the Maximally Stable Extremal Region (MSER) technique with the Hough transform to detect and recognize lane markings (i.e., lines and pictograms). Our system begins by localizing the region of interest using the MSER technique. A three-stage refinement computing algorithm is then introduced to enhance the results of MSER and to filter out undesirable information such as trees and vehicles. To achieve the requirements of real-time systems, the Progressive Probabilistic Hough Transform (PPHT) is used in the detection stage to detect line markings. Next, the recognition of the color and the form of line markings is performed; this it is based on the results of the application of the MSER to left and right line markings. The recognition of High-Occupancy Vehicle pictograms is performed using a new algorithm, based on the results of MSER regions. In the tracking stage, Kalman filter is used to track both ends of each detected line marking. Several experiments are conducted to show the efficiency of our system. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "66423bc00bb724d1d0c616397d898dd0",
"text": "Background\nThere is a growing trend for patients to seek the least invasive treatments with less risk of complications and downtime for facial rejuvenation. Thread embedding acupuncture has become popular as a minimally invasive treatment. However, there is little clinical evidence in the literature regarding its effects.\n\n\nMethods\nThis single-arm, prospective, open-label study recruited participants who were women aged 40-59 years, with Glogau photoaging scale III-IV. Fourteen participants received thread embedding acupuncture one time and were measured before and after 1 week from the procedure. The primary outcome was a jowl to subnasale vertical distance. The secondary outcomes were facial wrinkle distances, global esthetic improvement scale, Alexiades-Armenakas laxity scale, and patient-oriented self-assessment scale.\n\n\nResults\nFourteen participants underwent thread embedding acupuncture alone, and 12 participants revisited for follow-up outcome measures. For the primary outcome measure, both jowls were elevated in vertical height by 1.87 mm (left) and 1.43 mm (right). Distances of both melolabial and nasolabial folds showed significant improvement. In the Alexiades-Armenakas laxity scale, each evaluator evaluated for four and nine participants by 0.5 grades improved. In the global aesthetic improvement scale, improvement was graded as 1 and 2 in nine and five cases, respectively. The most common adverse events were mild bruising, swelling, and pain. However, adverse events occurred, although mostly minor and of short duration.\n\n\nConclusion\nIn this study, thread embedding acupuncture showed clinical potential for facial wrinkles and laxity. However, further large-scale trials with a controlled design and objective measurements are needed.",
"title": ""
},
{
"docid": "dcd116e601c9155d60364c19a1f0dfb7",
"text": "The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure was developed to aid clinicians with a dimensional assessment of psychopathology; however, this measure resembles a screening tool for several symptomatic domains. The objective of the current study was to examine the basic parameters of sensitivity, specificity, positive and negative predictive power of the measure as a screening tool. One hundred and fifty patients in a correctional community center filled out the measure prior to a psychiatric evaluation, including the Mini International Neuropsychiatric Interview screen. The above parameters were calculated for the domains of depression, mania, anxiety, and psychosis. The results showed that the sensitivity and positive predictive power of the studied domains was poor because of a high rate of false positive answers on the measure. However, when the lowest threshold on the Cross-Cutting Symptom Measure was used, the sensitivity of the anxiety and psychosis domains and the negative predictive values for mania, anxiety and psychosis were good. In conclusion, while it is foreseeable that some clinicians may use the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a screening tool, it should not be relied on to identify positive findings. It functioned well in the negative prediction of mania, anxiety and psychosis symptoms.",
"title": ""
},
{
"docid": "81f82ecbc43653566319c7e04f098aeb",
"text": "Social microblogs such as Twitter and Weibo are experiencing an explosive growth with billions of global users sharing their daily observations and thoughts. Beyond public interests (e.g., sports, music), microblogs can provide highly detailed information for those interested in public health, homeland security, and financial analysis. However, the language used in Twitter is heavily informal, ungrammatical, and dynamic. Existing data mining algorithms require extensive manually labeling to build and maintain a supervised system. This paper presents STED, a semi-supervised system that helps users to automatically detect and interactively visualize events of a targeted type from twitter, such as crimes, civil unrests, and disease outbreaks. Our model first applies transfer learning and label propagation to automatically generate labeled data, then learns a customized text classifier based on mini-clustering, and finally applies fast spatial scan statistics to estimate the locations of events. We demonstrate STED’s usage and benefits using twitter data collected from Latin America countries, and show how our system helps to detect and track example events such as civil unrests and crimes.",
"title": ""
},
{
"docid": "185eef07170ace88d3d66593d3c5bd1b",
"text": "A compact triple-band H-shaped slot antenna fed by microstrip coupling is proposed. Four resonant modes are excited, including a monopole mode, a slot mode, and their higher-order modes, to cover GPS (1.575 GHz) and Wi-Fi (2.4-2.485 GHz and 5.15-5.85 GHz), respectively. Sensitivity study of the slot geometry upon the resonant modes have been conducted. The measured gains at these four resonant frequencies are 0.2 dBi, 3.5 dBi, 2.37 dBi, and 3.7 dBi, respectively, and the total efficiencies are -2.5 dB, -1.07 dB, -3.06 dB, and -2.7 dB, respectively. The size of this slot antenna is only 0.24λ0×0.034λ0, where λ0 is the free-space wavelength at 1.575 GHz, hence is suitable to install on notebook PC's and handheld devices.",
"title": ""
},
{
"docid": "ea308cdcedd9261fb9871cf84899b63f",
"text": "Purpose To identify and discuss the issues and success factors surrounding biometrics, especially in the context of user authentication and controls in the banking sector, using a case study. Design/methodology/approach The literature survey and analysis of the security models of the present information systems and biometric technologies in the banking sector provide the theoretical and practical background for this work. The impact of adopting biometric solutions in banks was analysed by considering the various issues and challenges from technological, managerial, social and ethical angles. These explorations led to identifying the success factors that serve as possible guidelines for a viable implementation of a biometric enabled authentication system in banking organisations, in particular for a major bank in New Zealand. Findings As the level of security breaches and transaction frauds increase day by day, the need for highly secure identification and personal verification information systems is becoming extremely important especially in the banking and finance sector. Biometric technology appeals to many banking organisations as a near perfect solution to such security threats. Though biometric technology has gained traction in areas like healthcare and criminology, its application in banking security is still in its infancy. Due to the close association of biometrics to human, physical and behavioural aspects, such technologies pose a multitude of social, ethical and managerial challenges. The key success factors proposed through the case study served as a guideline for a biometric enabled security project called Bio Sec, which is envisaged in a large banking organisation in New Zealand. This pilot study reveals that more than coping with the technology issues of gelling biometrics into the existing information systems, formulating a viable security plan that addresses user privacy fears, human tolerance levels, organisational change and legal issues is of prime importance. Originality/value Though biometric systems are successfully adopted in areas such as immigration control and criminology, there is a paucity of their implementation and research pertaining to banking environments. Not all banks venture into biometric solutions to enhance their security systems due to their socio technological issues. This paper fulfils the need for a guideline to identify the various issues and success factors for a viable biometric implementation in a bank’s access control system. This work is only a starting point for academics to conduct more research in the application of biometrics in the various facets of banking businesses.",
"title": ""
}
] |
scidocsrr
|
7a66359b7fd45cc847d2f450c94d0a22
|
DOM tree based approach for Web content extraction
|
[
{
"docid": "37b5a10646e741f8b7430a2037f6a472",
"text": "Web pages often contain clutter (such as pop-up ads, unnecessary images and extraneous links) around the body of an article that distracts a user from actual content. Extraction of \"useful and relevant\" content from web pages has many applications, including cell phone and PDA browsing, speech rendering for the visually impaired, and text summarization. Most approaches to removing clutter or making content more readable involve changing font size or removing HTML and data components such as images, which takes away from a webpage's inherent look and feel. Unlike \"Content Reformatting\", which aims to reproduce the entire webpage in a more convenient form, our solution directly addresses \"Content Extraction\". We have developed a framework that employs easily extensible set of techniques that incorporate advantages of previous work on content extraction. Our key insight is to work with the DOM trees, rather than with raw HTML markup. We have implemented our approach in a publicly available Web proxy to extract content from HTML web pages.",
"title": ""
},
{
"docid": "ef14d26a613cec20b7ea36e24c197da1",
"text": "In this paper, we propose a new approach to discover informative contents from a set of tabular documents (or Web pages) of a Web site. Our system, InfoDiscoverer, first partitions a page into several content blocks according to HTML tag <TABLE> in a Web page. Based on the occurrence of the features (terms) in the set of pages, it calculates entropy value of each feature. According to the entropy value of each feature in a content block, the entropy value of the block is defined. By analyzing the information measure, we propose a method to dynamically select the entropy-threshold that partitions blocks into either informative or redundant. Informative content blocks are distinguished parts of the page, whereas redundant content blocks are common parts. Based on the answer set generated from 13 manually tagged news Web sites with a total of 26,518 Web pages, experiments show that both recall and precision rates are greater than 0.956. That is, using the approach, informative blocks (news articles) of these sites can be automatically separated from semantically redundant contents such as advertisements, banners, navigation panels, news categories, etc. By adopting InfoDiscoverer as the preprocessor of information retrieval and extraction applications, the retrieval and extracting precision will be increased, and the indexing size and extracting complexity will also be reduced.",
"title": ""
}
] |
[
{
"docid": "cb95c63a4c3c350253416a22e347ce46",
"text": "In recent times, with the increasing interest in conversational agents for smart homes, task-oriented dialog systems are being actively researched. However, most of these studies are focused on the individual modules of such a system, and there is an evident lack of research on a dialog framework that can integrate and manage the entire dialog system. Therefore, in this study, we propose a framework that enables the user to effectively develop an intelligent dialog system. The proposed framework ontologically expresses the knowledge required for the task-oriented dialog system's process and can build a dialog system by editing the dialog knowledge. In addition, the framework provides a module router that can indirectly run externally developed modules. Further, it enables a more intelligent conversation by providing a hierarchical argument structure (HAS) to manage the various argument representations included in natural language sentences. To verify the practicality of the framework, an experiment was conducted in which developers without any previous experience in developing a dialog system developed task-oriented dialog systems using the proposed framework. The experimental results show that even beginner dialog system developers can develop a high-level task-oriented dialog system.",
"title": ""
},
{
"docid": "3cf7fc89e6a9b7295079dd74014f166b",
"text": "BACKGROUND\nHigh-resolution MRI has been shown to be capable of identifying plaque constituents, such as the necrotic core and intraplaque hemorrhage, in human carotid atherosclerosis. The purpose of this study was to evaluate differential contrast-weighted images, specifically a multispectral MR technique, to improve the accuracy of identifying the lipid-rich necrotic core and acute intraplaque hemorrhage in vivo.\n\n\nMETHODS AND RESULTS\nEighteen patients scheduled for carotid endarterectomy underwent a preoperative carotid MRI examination in a 1.5-T GE Signa scanner using a protocol that generated 4 contrast weightings (T1, T2, proton density, and 3D time of flight). MR images of the vessel wall were examined for the presence of a lipid-rich necrotic core and/or intraplaque hemorrhage. Ninety cross sections were compared with matched histological sections of the excised specimen in a double-blinded fashion. Overall accuracy (95% CI) of multispectral MRI was 87% (80% to 94%), sensitivity was 85% (78% to 92%), and specificity was 92% (86% to 98%). There was good agreement between MRI and histological findings, with a value of kappa=0.69 (0.53 to 0.85).\n\n\nCONCLUSIONS\nMultispectral MRI can identify the lipid-rich necrotic core in human carotid atherosclerosis in vivo with high sensitivity and specificity. This MRI technique provides a noninvasive tool to study the pathogenesis and natural history of carotid atherosclerosis. Furthermore, it will permit a direct assessment of the effect of pharmacological therapy, such as aggressive lipid lowering, on plaque lipid composition.",
"title": ""
},
{
"docid": "fa2c3c8946ebb97e119ba25cab52ff5c",
"text": "The digital era arrives with a whole set of disruptive technologies that creates both risk and opportunity for open sources analysis. Although the sheer quantity of online conversations makes social media a huge source of information, their analysis is still a challenging task and many of traditional methods and research methodologies for data mining are not fit for purpose. Social data mining revolves around subjective content analysis, which deals with the computational processing of texts conveying people's evaluations, beliefs, attitudes and emotions. Opinion mining and sentiment analysis are the main paradigm of social media exploration and both concepts are often interchangeable. This paper investigates the use of appraisal categories to explore data gleaned for social media, going beyond the limitations of traditional sentiment and opinion-oriented approaches. Categories of appraisal are grounded on cognitive foundations of the appraisal theory, according to which people's emotional response are based on their own evaluative judgments or appraisals of situations, events or objects. A formal model is developed to describe and explain the way language is used in the cyberspace to evaluate, express mood and subjective states, construct personal standpoints and manage interpersonal interactions and relationships. A general processing framework is implemented to illustrate how the model is used to analyze a collection of tweets related to extremist attitudes.",
"title": ""
},
{
"docid": "964f4f8c14432153d6001d961a1b5294",
"text": "Although there are numerous search engines in the Web environment, no one could claim producing reliable results in all conditions. This problem is becoming more serious considering the exponential growth of the number of Web resources. In the response to these challenges, the meta-search engines are introduced to enhance the search process by devoting some outstanding search engines as their information resources. In recent years, some approaches are proposed to handle the result combination problem which is the fundamental problem in the meta-search environment. In this paper, a new merging/re-ranking method is introduced which uses the characteristics of the Web co-citation graph that is constructed from search engines and returned lists. The information extracted from the co-citation graph, is combined and enriched by the userspsila click-through data as their implicit feedback in an adaptive framework. Experimental results show a noticeable improvement against the basic method as well as some well-known meta-search engines.",
"title": ""
},
{
"docid": "fce754c728d17319bae7ebe8f532dfe1",
"text": "As previous OS abstractions and structures fail to explicitly consider the separation between resource users an d providers, the shift toward server-side computing poses se rious challenges to OS structures, which is aggravated by the increasing many-core scale and workload diversity. This paper presents the horizontal OS model. We propose a new OS abstraction—subOS—an independent OS instance owning physical resources that can be created, destroyed, a nd resized swiftly. We horizontally decompose the OS into the s upervisor for the resource provider and several subOSes for r esource users. The supervisor discovers, monitors, and prov isions resources for subOSes, while each subOS independentl y runs applications. We confine state sharing among subOSes, but allow on-demand state sharing if necessary. We present the first implementation—RainForest, which supports unmodified Linux applications binaries. Our comprehensive evaluations using six benchmark suites quantit atively show RainForest outperforms Linux with three differ ent kernels, LXC, and XEN. The RainForest source code is soon available.",
"title": ""
},
{
"docid": "f7d023abf0f651177497ae38d8494efc",
"text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight semantic–based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation. Using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; It significantly reduces the amount of text to be processed by the system.",
"title": ""
},
{
"docid": "b3f423e513c543ecc9fe7003ff9880ea",
"text": "Increasing attention has been paid to air quality monitoring with a rapid development in industry and transportation applications in the modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement the air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques. In this system, portable sensors collect the air quality information timely, which is transmitted through a low power wide area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.",
"title": ""
},
{
"docid": "d79117efb3d77cab5a245648b295fccf",
"text": "We analyze a jump linear Markov system being stabilized using a linear controller. We consider the case when the Markov state is associated with the probability distribution of a measured variable. We assume that the Markov state is not known, but rather is being estimated based on the observations of the variable. We present conditions for the stability of such a system and also solve the optimal LQR control problem for the case when the state estimate update uses only the last observation value. In particular we consider a suboptimal version of the causal Viterbi estimation algorithm and show that a separation property does not hold between the optimal control and the Markov state estimate. Some simple examples are also presented.",
"title": ""
},
{
"docid": "75617ed6450606c8019bb2f5471ac358",
"text": "Depression is one of the most common mood disorders. Technology has the potential to assist in screening and treating people with depression by robustly modeling and tracking the complex behavioral cues associated with the disorder (e.g., speech, language, facial expressions, head movement, body language). Similarly, robust affect recognition is another challenge which stands to benefit from modeling such cues. The Audio/Visual Emotion Challenge (AVEC) aims toward understanding the two phenomena and modeling their correlation with observable cues across several modalities. In this paper, we use multimodal signal processing methodologies to address the two problems using data from human-computer interactions. We develop separate systems for predicting depression levels and affective dimensions, experimenting with several methods for combining the multimodal information. The proposed depression prediction system uses a feature selection approach based on audio, visual, and linguistic cues to predict depression scores for each session. Similarly, we use multiple systems trained on audio and visual cues to predict the affective dimensions in continuous-time. Our affect recognition system accounts for context during the frame-wise inference and performs a linear fusion of outcomes from the audio-visual systems. For both problems, our proposed systems outperform the video-feature based baseline systems. As part of this work, we analyze the role played by each modality in predicting the target variable and provide analytical insights.",
"title": ""
},
{
"docid": "c1942b141986fde3d9161383ba8d7949",
"text": "VideoWhiteboard is a prototype tool to support remote shared drawing activity. It provides a whiteboard-sized shared drawing space for collaborators who are located in remote sites. It allows each user to see the drawings and a shadow of the gestures of collaborators at the remote site. The development of VideoWhiteboard is based on empirical studies of collaborative drawing activity, including experiences in using the VideoDraw shared drawing prototype. VideoWhiteboard enables remote collaborators to work together much as if they were sharing a whiteboard, and in some ways allows them to work together even more closely than if they were in the same room.",
"title": ""
},
{
"docid": "0afde87c9fb4fb21c6bad3196ef433d0",
"text": "Blockchain and verifiable identities have a lot of potential in future distributed software applications e.g. smart cities, eHealth, autonomous vehicles, networks, etc. In this paper, we proposed a novel technique, namely VeidBlock, to generate verifiable identities by following a reliable authentication process. These entities are managed by using the concepts of blockchain ledger and distributed through an advance mechanism to protect them against tampering. All identities created using VeidBlock approach are verifiable and anonymous therefore it preserves user's privacy in verification and authentication phase. As a proof of concept, we implemented and tested the VeidBlock protocols by integrating it in a SDN based infrastructure. Analysis of the test results yield that all components successfully and autonomously performed initial authentication and locally verified all the identities of connected components.",
"title": ""
},
{
"docid": "56d0609fe4e68abbce27124dd5291033",
"text": "Existing works indicate that the absence of explicit discourse connectives makes it difficult to recognize implicit discourse relations. In this paper we attempt to overcome this difficulty for implicit relation recognition by automatically inserting discourse connectives between arguments with the use of a language model. Then we propose two algorithms to leverage the information of these predicted connectives. One is to use these predicted implicit connectives as additional features in a supervised model. The other is to perform implicit relation recognition based only on these predicted connectives. Results on Penn Discourse Treebank 2.0 show that predicted discourse connectives help implicit relation recognition and the first algorithm can achieve an absolute average f-score improvement of 3% over a state of the art baseline system.",
"title": ""
},
{
"docid": "8f2a4de3669b26af17cd127387769ad6",
"text": "This research provides the first empirical investigation of how approach and avoidance motives for engaging in sex in intimate relationships are associated with personal well-being and relationship quality. A 2-week daily experience study of college student dating couples tested specific predictions from the theoretical model and included both longitudinal and dyadic components. Whereas approach sex motives were positively associated with personal and interpersonal well-being, avoidance sex motives were negatively associated with well-being. Engaging in sex for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner s motives for sex were also associated with well-being. Implications for the conceptualization of sexuality in relationships along these two dimensions are discussed. Sexual interactions in young adulthood can be positive forces that bring partners closer and make them feel good about themselves and their relationships. In the National Health and Social Life Survey (NHSLS), 78% of participants in monogamous dating relationships reported being either extremely or very pleased with their sexual relationship (Laumann, Gagnon, Michael, & Michaels, 1994). For instance, when asked to rate specific feelings they experienced after engaging in sex, a majority of the participants reported positive feelings (i.e., ‘‘felt loved,’’ ‘‘thrilled,’’ ‘‘wanted,’’ or ‘‘taken care of ’’). More generally, feelings of satisfaction with the sexual aspects of an intimate relationship contribute to overall relationship satisfaction and stability over time (e.g., Sprecher, 2002; see review by Sprecher & Cate, 2004). In short, sexual interactions can be potent forces that sustain and enhance intimate relationships. For some individuals and under certain circumstances, however, sexual interactions can be anything but positive and rewarding. They may create emotional distress, personal discontent, and relationship conflict. For instance, in the NHSLS, a sizable minority of respondents in dating relationships indicated that sex with an exclusive partner made them feel ‘‘sad,’’ ‘‘anxious and worried,’’ ‘‘scared and afraid,’’ or ‘‘guilty’’ (Laumann et al., 1994). Negative reactions to sex may stem from such diverse sources as prior traumatic or coercive experiences in relationships, feeling at a power disadvantage in one s current relationship, or discrepancies in sexual desire between partners, to name a few (e.g., Davies, Katz, & Jackson, 1999; Muehlenhard & Schrag, 1991). The studies reported here were based on Emily A. Impett s dissertation. Preparation of this article was supported by a fellowship awarded to the first author from the Sexuality Research Fellowship Program of the Social Science Research Council with funds provided by the Ford Foundation. We thank Katie Bishop, Renee Delgado, and Laura Tsang for their assistance with data collection and Andrew Christensen, Terri Conley, Martie Haselton, and Linda Sax for comments on an earlier version of this manuscript. Correspondence should be addressed to Emily A. Impett, Center for Research on Gender and Sexuality, San Francisco State University, 2017 Mission Street #300, San Francisco, CA 94110, e-mail: eimpett@sfsu.edu. Personal Relationships, 12 (2005), 465–482. Printed in the United States of America. Copyright 2005 IARR. 1350-4126=05",
"title": ""
},
{
"docid": "72be75e973b6a843de71667566b44929",
"text": "We think that hand pose estimation technologies with a camera should be developed for character conversion systems from sign languages with a not so high performance terminal. Fingernail positions can be used for getting finger information which can’t be obtained from outline information. Therefore, we decided to construct a practical fingernail detection system. The previous fingernail detection method, using distribution density of strong nail-color pixels, was not good at removing some skin areas having gloss like finger side area. Therefore, we should use additional information to remove them. We thought that previous method didn’t use boundary information and this information would be available. Color continuity information is available for getting it. In this paper, therefore, we propose a new fingernail detection method using not only distribution density but also color continuity to improve accuracy. We investigated the relationship between wrist rotation angles and percentages of correct detection. The number of users was three. As a result, we confirmed that our proposed method raised accuracy compared with previous method and could detect only fingernails with at least 85% probability from -90 to 40 degrees and from 40 to 90 degrees. Therefore, we concluded that our proposed method was effective.",
"title": ""
},
{
"docid": "18d7b3f9f966f36af7ab6ceca1f5440c",
"text": "This letter presents a Si nanowire based tunneling field-effect transistor (TFET) using a CMOS-compatible vertical gate-all-around structure. By minimizing the thermal budget with low-temperature dopant-segregated silicidation for the source-side dopant activation, excellent TFET characteristics were obtained. We have demonstrated for the first time the lowest ever reported subthreshold swing (SS) of 30 mV/decade at room temperature. In addition, we reported a very convincing SS of 50 mV/decade for close to three decades of drain current. Moreover, our TFET device exhibits excellent characteristics without ambipolar behavior and with high Ion/Ioff ratio (105), as well as low Drain-Induced Barrier Lowering of 70 mV/V.",
"title": ""
},
{
"docid": "26ad79619be484ec239daf5b735ae5a4",
"text": "The placenta is a complex organ, playing multiple roles during fetal development. Very little is known about the association between placental morphological abnormalities and fetal physiology. In this work, we present an open sourced, computationally tractable deep learning pipeline to analyse placenta histology at the level of the cell. By utilising two deep convolutional neural network architectures and transfer learning, we can robustly localise and classify placental cells within five classes with an accuracy of 89%. Furthermore, we learn deep embeddings encoding phenotypic knowledge that is capable of both stratifying five distinct cell populations and learn intraclass phenotypic variance. We envisage that the automation of this pipeline to population scale studies of placenta histology has the potential to improve our understanding of basic cellular placental biology and its variations, particularly its role in predicting adverse birth outcomes.",
"title": ""
},
{
"docid": "34690f455f9e539b06006f30dd3e512b",
"text": "Disaster relief operations rely on the rapid deployment of wireless network architectures to provide emergency communications. Future emergency networks will consist typically of terrestrial, portable base stations and base stations on-board low altitude platforms (LAPs). The effectiveness of network deployment will depend on strategically chosen station positions. In this paper a method is presented for calculating the optimal proportion of the two station types and their optimal placement. Random scenarios and a real example from Hurricane Katrina are used for evaluation. The results confirm the strength of LAPs in terms of high bandwidth utilisation, achieved by their ability to cover wide areas, their portability and adaptability to height. When LAPs are utilized, the total required number of base stations to cover a desired area is generally lower. For large scale disasters in particular, this leads to shorter response times and the requirement of fewer resources. This goal can be achieved more easily if algorithms such as the one presented in this paper are used.",
"title": ""
},
{
"docid": "ff5d1ace34029619d79342e5fe63e0b7",
"text": "In this paper, Proposes SIW slot antenna backed with a cavity for 57-64 GHz frequency. This frequency is used for wireless communication applications. The proposed antenna is designed by using Rogers substrate with dielectric constant of 2.2, substrate thickness is 0.381 mm and the microstrip feed is used with the input impedance of 50ohms. The structure provides 5.2GHz impedance bandwidth with a range of 57.8 to 64 GHz and matches with VSWR 2:1. The values of reflection coefficient, VSWR, gain, transmission efficiency and radiation efficiency of proposed antenna at 60GHz are −17.32dB, 1.3318, 7.19dBi, 79.5% and 89.5%.",
"title": ""
},
{
"docid": "cebfc5224413c5acb7831cbf29ae5a8e",
"text": "Radio Frequency (RF) Energy Harvesting holds a pro mising future for generating a small amount of electrical power to drive partial circuits in wirelessly communicating electronics devices. Reducing power consumption has become a major challenge in wireless sensor networks. As a vital factor affecting system cost and lifetime, energy consumption in wireless sensor networks is an emerging and active res arch area. This chapter presents a practical approach for RF Energy harvesting and man agement of the harvested and available energy for wireless sensor networks using the Impro ved Energy Efficient Ant Based Routing Algorithm (IEEABR) as our proposed algorithm. The c hapter looks at measurement of the RF power density, calculation of the received power, s torage of the harvested power, and management of the power in wireless sensor networks . The routing uses IEEABR technique for energy management. Practical and real-time implemen tatio s of the RF Energy using PowercastTM harvesters and simulations using the ene rgy model of our Libelium Waspmote to verify the approach were performed. The chapter con cludes with performance analysis of the harvested energy, comparison of IEEABR and other tr aditional energy management techniques, while also looking at open research areas of energy harvesting and management for wireless sensor networks.",
"title": ""
},
{
"docid": "1564a94998151d52785dd0429b4ee77d",
"text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of update costs of location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. The study presents an intelligent location management approach which has interacts between intelligent information system and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transition between the VLR and HLR. The study provides algorithms are ability to handle location registration and call delivery.",
"title": ""
}
] |
scidocsrr
|
3569c33942343532ad67adae1cf900b4
|
CORD: Energy-Efficient Reliable Bulk Data Dissemination in Sensor Networks
|
[
{
"docid": "a9550f5f2158f0519a66264b6a948c29",
"text": "In this paper, we present a family of adaptive protocols, called SPIN (Sensor Protocols for Information via Negotiation), that efficiently disseminate information among sensors in an energy-constrained wireless sensor network. Nodes running a SPIN communication protocol name their data using high-level data descriptors, called meta-data. They use meta-data negotiations to eliminate the transmission of redundant data throughout the network. In addition, SPIN nodes can base their communication decisions both upon application-specific knowledge of the data and upon knowledge of the resources that are available to them. This allows the sensors to efficiently distribute data given a limited energy supply. We simulate and analyze the performance of four specific SPIN protocols: SPIN-PP and SPIN-EC, which are optimized for a point-to-point network, and SPIN-BC and SPIN-RL, which are optimized for a broadcast network. Comparing the SPIN protocols to other possible approaches, we find that the SPIN protocols can deliver 60% more data for a given amount of energy than conventional approaches in a point-to-point network and 80% more data for a given amount of energy in a broadcast network. We also find that, in terms of dissemination rate and energy usage, the SPIN protocols perform close to the theoretical optimum in both point-to-point and broadcast networks.",
"title": ""
}
] |
[
{
"docid": "be4fbfdde6ec503bebd5b2a8ddaa2820",
"text": "Attack-defence Capture The Flag (CTF) competitions are effective pedagogic platforms to teach secure coding practices due to the interactive and real-world experiences they provide to the contest participants. Two of the key challenges that prevent widespread adoption of such contests are: 1) The game infrastructure is highly resource intensive requiring dedication of significant hardware resources and monitoring by organizers during the contest and 2) the participants find the gameplay to be complicated, requiring performance of multiple tasks that overwhelms inexperienced players. In order to address these, we propose a novel attack-defence CTF game infrastructure which uses application containers. The results of our work showcase effectiveness of these containers and supporting tools in not only reducing the resources organizers need but also simplifying the game infrastructure. The work also demonstrates how the supporting tools can be leveraged to help participants focus more on playing the game i.e. attacking and defending services and less on administrative tasks. The results from this work indicate that our architecture can accommodate over 150 teams with 15 times fewer resources when compared to existing infrastructures of most contests today.",
"title": ""
},
{
"docid": "71c4f414520c171aca6e88753c9ef179",
"text": "This brief presents an ultralow quiescent class-AB error amplifier (ERR AMP) of low dropout (LDO) and a slew-rate (SR) enhancement circuit to minimize compensation capacitance and speed up transient response designed in the 0.11-μm 1-poly 6-metal CMOS process. In order to increase the current capability with a low standby quiescent current under large-signal operation, the proposed scheme has a class-AB-operation operational transconductance amplifier (OTA) that acts as an ERR AMP. As a result, the new OTA achieved a higher dc gain and faster settling time than conventional OTAs, demonstrating a dc gain improvement of 15.8 dB and a settling time six times faster than that of a conventional OTA. The proposed additional SR enhancement circuit improved the response based on voltage-spike detection when the voltage dramatically changed at the output node.",
"title": ""
},
{
"docid": "db3758b88c374135c1c7c935c09ba233",
"text": "Graphical models provide a rich framework for summarizing the dependencies among variables. The graphical lasso approach attempts to learn the structure of a Gaussian graphical model (GGM) by maximizing the log likelihood of the data, subject to an l1 penalty on the elements of the inverse co-variance matrix. Most algorithms for solving the graphical lasso problem do not scale to a very large number of variables. Furthermore, the learned network structure is hard to interpret. To overcome these challenges, we propose a novel GGM structure learning method that exploits the fact that for many real-world problems we have prior knowledge that certain edges are unlikely to be present. For example, in gene regulatory networks, a pair of genes that does not participate together in any of the cellular processes, typically referred to as pathways, is less likely to be connected. In computer vision applications in which each variable corresponds to a pixel, each variable is likely to be connected to the nearby variables. In this paper, we propose the pathway graphical lasso, which learns the structure of a GGM subject to pathway-based constraints. In order to solve this problem, we decompose the network into smaller parts, and use a message-passing algorithm in order to communicate among the subnetworks. Our algorithm has orders of magnitude improvement in run time compared to the state-of-the-art optimization methods for the graphical lasso problem that were modified to handle pathway-based constraints.",
"title": ""
},
{
"docid": "19e070089a8495a437e81da50f3eb21c",
"text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.",
"title": ""
},
{
"docid": "6ad7d97140d7a5d6b72039b4bb9c3be5",
"text": "This study evaluated the criterion-related validity of the Electronic Head Posture Instrument (EHPI) in measuring the craniovertebral (CV) angle by correlating the measurements of CV angle with anterior head translation (AHT) in lateral cervical radiographs. It also investigated the correlation of AHT and CV angle with the Chinese version of the Northwick Park Questionnaire (NPQ) and Numeric Pain Rating Scale (NPRS). Thirty patients with diagnosis of mechanical neck pain for at least 3 months without referred symptoms were recruited in an outpatient physiotherapy clinic. The results showed that AHT measured with X-ray correlated negatively with CV angle measured with EHPI (r = -0.71, p < 0.001). CV angle also correlated negatively with NPQ (r = -0.67, p < 0.001) and NPRS (r = -0.70, p < 0.001), while AHT positively correlated with NPQ (r = 0.390, p = 0.033) and NPRS (r = 0.49, p = 0.006). We found a negative correlation between CV angle measured with the EHPI and AHT measured with the X-ray lateral film as well as with NPQ and NPRS in patients with chronic mechanical neck pain. EHPI is a valid tool in clinically assessing and evaluating cervical posture of patients with chronic mechanical neck pain.",
"title": ""
},
{
"docid": "442504997ef102d664081b390ff09dd3",
"text": "An intelligent traffic management system (E-Traffic Warden) is proposed, using image processing techniques along with smart traffic control algorithm. Traffic recognition was achieved using cascade classifier for vehicle recognition utilizing Open CV and Visual Studio C/C++. The classifier was trained on 700 positive samples and 1140 negative samples. The results show that the accuracy of vehicle detection is approximately 93 percent. The count of vehicles at all approaches of intersection is used to estimate traffic. Traffic build up is then avoided or resolved by passing the extracted data to traffic control algorithm. The control algorithm shows approximately 86% improvement over Fixed-Delay controller in worst case scenarios.",
"title": ""
},
{
"docid": "1c66d84dfc8656a23e2a4df60c88ab51",
"text": "Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.",
"title": ""
},
{
"docid": "aed7133c143edbe0e1c6f6dfcddee9ec",
"text": "This paper describes a version of the auditory image model (AIM) [1] implemented in MATLAB. It is referred to as “aim-mat” and it includes the basic modules that enable AIM to simulate the spectral analysis, neural encoding and temporal integration performed by the auditory system. The dynamic representations produced by non-static sounds can be viewed on a frame-by-frame basis or in movies with synchronized sound. The software has a sophisticated graphical user interface designed to facilitate the auditory modelling. It is also possible to add MATLAB code and complete modules to aim-mat. The software can be downloaded from http://www.mrccbu.cam.ac.uk/cnbh/aimmanual",
"title": ""
},
{
"docid": "e011ab57139a9a2f6dc13033b0ab6223",
"text": "Over the last few years, virtual reality (VR) has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate immersive experiences at low cost. One of the remaining challenges to further improve visual comfort in VR experiences is the vergence-accommodation conflict inherent to all stereoscopic displays. Accurate reproduction of all depth cues is crucial for visual comfort. By combining well-known stereoscopic display principles with emerging factored light field technology, we present the first wearable VR display supporting high image resolution as well as focus cues. A light field is presented to each eye, which provides more natural viewing experiences than conventional near-eye displays. Since the eye box is just slightly larger than the pupil size, rank-1 light field factorizations are sufficient to produce correct or nearly-correct focus cues; no time-multiplexed image display or gaze tracking is required. We analyze lens distortions in 4D light field space and correct them using the afforded high-dimensional image formation. We also demonstrate significant improvements in resolution and retinal blur quality over related near-eye displays. Finally, we analyze diffraction limits of these types of displays.",
"title": ""
},
{
"docid": "806a83d17d242a7fd5272862158db344",
"text": "Solar power has become an attractive alternative of electricity energy. Solar cells that form the basis of a solar power system are mainly based on multicrystalline silicon. A set of solar cells are assembled and interconnected into a large solar module to offer a large amount of electricity power for commercial applications. Many defects in a solar module cannot be visually observed with the conventional CCD imaging system. This paper aims at defect inspection of solar modules in electroluminescence (EL) images. The solar module charged with electrical current will emit infrared light whose intensity will be darker for intrinsic crystal grain boundaries and extrinsic defects including micro-cracks, breaks and finger interruptions. The EL image can distinctly highlight the invisible defects but also create a random inhomogeneous background, which makes the inspection task extremely difficult. The proposed method is based on independent component analysis (ICA), and involves a learning and a detection stage. The large solar module image is first divided into small solar cell subimages. In the training stage, a set of defect-free solar cell subimages are used to find a set of independent basis images using ICA. In the inspection stage, each solar cell subimage under inspection is reconstructed as a linear combination of the learned basis images. The coefficients of the linear combination are used as the feature vector for classification. Also, the reconstruction error between the test image and its reconstructed image from the ICA basis images is also evaluated for detecting the presence of defects. Experimental results have shown that the image reconstruction with basis images distinctly outperforms the ICA feature extraction approach. It can achieve a mean recognition rate of 93.4% for a set of 80 test samples.",
"title": ""
},
{
"docid": "c3317ea39578195cab8801b8a31b21b6",
"text": "We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyperparameters. Inspired by the principle of “optimism under uncertainty,” we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on n samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior.",
"title": ""
},
{
"docid": "ce3ac7716734e2ebd814900d77ca3dfb",
"text": "The large pose discrepancy between two face images is one of the fundamental challenges in automatic face recognition. Conventional approaches to pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes a Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator enables DR-GAN to learn a representation that is both generative and discriminative, which can be used for face image synthesis and pose-invariant face recognition. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified identity representation along with an arbitrary number of synthetic face images. Extensive quantitative and qualitative evaluation on a number of controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art in both learning representations and rotating large-pose face images.",
"title": ""
},
{
"docid": "e737bb31bb7dbb6dbfdfe0fd01bfe33c",
"text": "Cannabidiol (CBD) is a non-psychotomimetic phytocannabinoid derived from Cannabis sativa. It has possible therapeutic effects over a broad range of neuropsychiatric disorders. CBD attenuates brain damage associated with neurodegenerative and/or ischemic conditions. It also has positive effects on attenuating psychotic-, anxiety- and depressive-like behaviors. Moreover, CBD affects synaptic plasticity and facilitates neurogenesis. The mechanisms of these effects are still not entirely clear but seem to involve multiple pharmacological targets. In the present review, we summarized the main biochemical and molecular mechanisms that have been associated with the therapeutic effects of CBD, focusing on their relevance to brain function, neuroprotection and neuropsychiatric disorders.",
"title": ""
},
{
"docid": "2b595cab271cac15ea165e46459d6923",
"text": "Autonomous Mobility On Demand (MOD) systems can utilize fleet management strategies in order to provide a high customer quality of service (QoS). Previous works on autonomous MOD systems have developed methods for rebalancing single capacity vehicles, where QoS is maintained through large fleet sizing. This work focuses on MOD systems utilizing a small number of vehicles, such as those found on a campus, where additional vehicles cannot be introduced as demand for rides increases. A predictive positioning method is presented for improving customer QoS by identifying key locations to position the fleet in order to minimize expected customer wait time. Ridesharing is introduced as a means for improving customer QoS as arrival rates increase. However, with ridesharing perceived QoS is dependent on an often unknown customer preference. To address this challenge, a customer ratings model, which learns customer preference from a 5-star rating, is developed and incorporated directly into a ridesharing algorithm. The predictive positioning and ridesharing methods are applied to simulation of a real-world campus MOD system. A combined predictive positioning and ridesharing approach is shown to reduce customer service times by up to 29%. and the customer ratings model is shown to provide the best overall MOD fleet management performance over a range of customer preferences.",
"title": ""
},
{
"docid": "60de343325a305b08dfa46336f2617b5",
"text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.",
"title": ""
},
{
"docid": "31be3d5db7d49d1bfc58c81efec83bdc",
"text": "Electromagnetic elements such as inductance are not used in switched-capacitor converters to convert electrical power. In contrast, capacitors are used for storing and transforming the electrical power in these new topologies. Lower volume, higher power density, and more integration ability are the most important features of these kinds of converters. In this paper, the most important switched-capacitor converters topologies, which have been developed in the last decade as new topologies in power electronics, are introduced, analyzed, and compared with each other, in brief. Finally, a 100 watt double-phase half-mode resonant converter is simulated to convert 48V dc to 24 V dc for light weight electrical vehicle applications. Low output voltage ripple (0.4%), and soft switching for all power diodes and switches are achieved under the worst-case conditions.",
"title": ""
},
{
"docid": "d6cca63107e04f225b66e02289c601a2",
"text": "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with a hashtag such as ‘#sarcasm’. We collected a training corpus of about 406 thousand Dutch tweets with hashtag synonyms denoting sarcasm. Assuming that the human labeling is correct (annotation of a sample indicates that about 90% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a sample of a day’s stream of 2.25 million Dutch tweets. Of the 353 explicitly marked tweets on this day, we detect 309 (87%) with the hashtag removed. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 35% of the top250 ranked tweets are indeed sarcastic. Analysis indicates that the use of hashtags reduces the further use of linguistic markers for signaling sarcasm, such as exclamations and intensifiers. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of non-verbal expressions that people employ in live interaction when conveying sarcasm. Checking the consistency of our finding in a language from another language family, we observe that in French the hashtag ‘#sarcasme’ has a similar polarity switching function, be it to a lesser extent. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0502b30d45e6f51a7eb0eeec1f0af2e9",
"text": "Identification and extraction of singing voice from within musical mixtures is a key challenge in sourc e separation and machine audition. Recently, deep neural network s (DNN) have been used to estimate 'ideal' binary masks for carefully controlled cocktail party speech separation problems. However, it is not yet known whether these methods are capab le of generalizing to the discrimination of voice and non -voice in the context of musical mixtures. Here, we trained a con volutional DNN (of around a billion parameters) to provide probabilistic estimates of the ideal binary mask for separation o f vocal sounds from real-world musical mixtures. We contrast our DNN results with more traditional linear methods. Our approach may be useful for automatic removal of vocal sounds from musical mixtures for 'karaoke' type applications.",
"title": ""
},
{
"docid": "27ddea786e06ffe20b4f526875cdd76b",
"text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized-as myth, folklore, and common sense had long understood-that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see AcFIIa ION_sYNTHESIS xxroTESis .) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles ;.dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic .(waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and .although of great interest to the study of the mind-body problem, these .findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment .",
"title": ""
}
] |
scidocsrr
|
d51e620d0827c768462fdccfb6158405
|
Sample-efficient Actor-Critic Reinforcement Learning with Supervised Data for Dialogue Management
|
[
{
"docid": "5cc1f15c45f57d1206e9181dc601ee4a",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a questionanswering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N based tracking model.",
"title": ""
},
{
"docid": "c5bbdfc0da1635ad0a007e60e224962f",
"text": "Natural gradient descent is an optimization method traditionally motivated from the perspective of information geometry, and works well for many applications as an alternative to stochastic gradient descent. In this paper we critically analyze this method and its properties, and show how it can be viewed as a type of approximate 2nd-order optimization method, where the Fisher information matrix used to compute the natural gradient direction can be viewed as an approximation of the Hessian. This perspective turns out to have significant implications for how to design a practical and robust version of the method. Among our various other contributions is a thorough analysis of the convergence speed of natural gradient descent and more general stochastic methods, a critical examination of the oft-used “empirical” approximation of the Fisher matrix, and an analysis of the (approximate) parameterization invariance property possessed by the method, which we show still holds for certain other choices of the curvature matrix, but notably not the Hessian. ∗jmartens@cs.toronto.edu 1 ar X iv :1 41 2. 11 93 v5 [ cs .L G ] 1 O ct 2 01 5",
"title": ""
},
{
"docid": "0f5959e5952a029cbe7807dc0268e25e",
"text": "We describe a two-step approach for dialogue management in task-oriented spoken dialogue systems. A unified neural network framework is proposed to enable the system to first learn by supervision from a set of dialogue data and then continuously improve its behaviour via reinforcement learning, all using gradientbased algorithms on one single model. The experiments demonstrate the supervised model’s effectiveness in the corpus-based evaluation, with user simulation, and with paid human subjects. The use of reinforcement learning further improves the model’s performance in both interactive settings, especially under higher-noise conditions.",
"title": ""
},
{
"docid": "3486d3493a0deef5c3c029d909e3cdfc",
"text": "To date, reinforcement learning has mostly been studied solving simple learning tasks. Reinforcement learning methods that have been studied so far typically converge slowly. The purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than previously studied, and 2) to investigate methods that will speed up reinforcement learning. This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions to both basic methods for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as an approach to generalization. To evaluate the performance of different frameworks, a dynamic environment was used as a testbed. The environment is moderately complex and nondeterministic. This paper describes these frameworks and algorithms in detail and presents empirical evaluation of the frameworks.",
"title": ""
},
{
"docid": "a986826041730d953dfbf9fbc1b115a6",
"text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"title": ""
}
] |
[
{
"docid": "5e4ab26751f36cad7b348320d71dd937",
"text": "In this paper we examine the relations between parent spatial language input, children's own production of spatial language, and children's later spatial abilities. Using a longitudinal study design, we coded the use of spatial language (i.e. words describing the spatial features and properties of objects; e.g. big, tall, circle, curvy, edge) from child age 14 to 46 months in a diverse sample of 52 parent-child dyads interacting in their home settings. These same children were given three non-verbal spatial tasks, items from a Spatial Transformation task (Levine et al., 1999), the Block Design subtest from the WPPSI-III (Wechsler, 2002), and items on the Spatial Analogies subtest from Primary Test of Cognitive Skills (Huttenlocher & Levine, 1990) at 54 months of age. We find that parents vary widely in the amount of spatial language they use with their children during everyday interactions. This variability in spatial language input, in turn, predicts the amount of spatial language children produce, controlling for overall parent language input. Furthermore, children who produce more spatial language are more likely to perform better on spatial problem solving tasks at a later age.",
"title": ""
},
{
"docid": "cff8ae2635684a6f0e07142175b7fbf1",
"text": "Collaborative writing is on the increase. In order to write well together, authors often need to be aware of who has done what recently. We offer a new tool, DocuViz, that displays the entire revision history of Google Docs, showing more than the one-step-at-a-time view now shown in revision history and tracking changes in Word. We introduce the tool and present cases in which the tool has the potential to be useful: To authors themselves to see recent \"seismic activity,\" indicating where in particular a co-author might want to pay attention, to instructors to see who has contributed what and which changes were made to comments from them, and to researchers interested in the new patterns of collaboration made possible by simultaneous editing capabilities.",
"title": ""
},
{
"docid": "e3459fda9310bb18e55caf505b13a08a",
"text": "Variable-speed pulsewidth-modulated (PWM) drives allow for precise speed control of induction motors, as well as a high power factor and fast response characteristics, compared with nonelectronic speed controllers. However, due to the high switching frequencies and the high dV/dt, there are increased dielectric stresses in the insulation system of the motor, leading to premature failure, in high power and medium- and high-voltage motors. Studying the degradation mechanism of these insulation systems on an actual motor is both extremely costly and impractical. In addition, to replicate the aging process, the same waveform that the motor is subjected to should be applied to the test samples. As a result, a low-power two-level high-voltage PWM inverter has been built to replicate the voltage waveforms for aging processes. This generator allows for testing the insulation systems considering a real PWM waveform in which both the fast pulses and the fundamental low frequency are included. The results show that the effects of PWM waveforms cannot be entirely replicated by a unipolar pulse generator.",
"title": ""
},
{
"docid": "266b9bfde23fdfaedb35d293f7293c93",
"text": "We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.",
"title": ""
},
{
"docid": "47baaddefd3476ce55d39a0f111ade5a",
"text": "We propose a novel method for classifying resume data of job applicants into 27 different job categories using convolutional neural networks. Since resume data is costly and hard to obtain due to its sensitive nature, we use domain adaptation. In particular, we train a classifier on a large number of freely available job description snippets and then use it to classify resume data. We empirically verify a reasonable classification performance of our approach despite having only a small amount of labeled resume data available.",
"title": ""
},
{
"docid": "bcda82b5926620060f65506ccbac042f",
"text": "This paper investigates spirolaterals for their beauty of form and the unexpected complexity arising from them. From a very simple generative procedure, spirolaterals can be created having great complexity and variation. Using mathematical and computer-based methods, issues of closure, variation, enumeration, and predictictability are discussed. A historical review is also included. The overriding interest in this research is to develop methods and procedures to investigate geometry for the purpose of inspiration for new architectural and sculptural forms. This particular phase will concern the two dimensional representations of spirolaterals.",
"title": ""
},
{
"docid": "b2cf33b05e93d1c15a32a54e8bc60bed",
"text": "Prevention of fraud and abuse has become a major concern of many organizations. The industry recognizes the problem and is just now starting to act. Although prevention is the best way to reduce frauds, fraudsters are adaptive and will usually find ways to circumvent such measures. Detecting fraud is essential once prevention mechanism has failed. Several data mining algorithms have been developed that allow one to extract relevant knowledge from a large amount of data like fraudulent financial statements to detect. In this paper we present an efficient approach for fraud detection. In our approach we first maintain a log file for data which contain the content separated by space, position and also the frequency. Then we encrypt the data by substitution method and send to the receiver end. We also send the log file to the receiver end before proceed to the encryption which is also in the form of secret message. So the receiver can match the data according to the content, position and frequency, if there is any mismatch occurs, we can detect the fraud and does not accept the file.",
"title": ""
},
{
"docid": "7cf625ce06d335d7758c868514b4c635",
"text": "Jeffrey's rule of conditioning has been proposed in order to revise a probability measure by another probability function. We generalize it within the framework of the models based on belief functions. We show that several forms of Jeffrey's conditionings can be defined that correspond to the geometrical rule of conditioning and to Dempster's rule of conditioning, respectively. 1. Jeffrey's rule in probability theory. In probability theory conditioning on an event . is classically obtained by the application of Bayes' rule. Let (Q, � , P) be a probability space where P(A) is the probability of the event Ae � where� is a Boolean algebra defined on a finite2 set n. P(A) quantified the degree of belief or the objective probability, depending on the interpretation given to the probability measure, that a particular arbitrary element m of n which is not a priori located in any of the sets of� belongs to a particular set Ae�. Suppose it is known that m belongs to Be� and P(B)>O. The probability measure P must be updated into PB that quantifies the same event as previously but after taking in due consideration the know ledge that me B. PB is obtained by Bayes' rule of conditioning: This rule can be obtained by requiring that: 81: VBE�. PB(B) = 1 82: VBe�, VX,Ye� such that X.Y�B. and PJ3(X) _ P(X) PB(Y)P(Y) PB(Y) = 0 ifP(Y)>O",
"title": ""
},
{
"docid": "6c4d6eff1fb7ef03efc3197726545ed8",
"text": "Gait enjoys advantages over other biometrics in that it can be perceived from a distance and is di/cult to disguise. Current approaches are mostly statistical and concentrate on walking only. By analysing leg motion we show how we can recognise people not only by the walking gait, but also by the running gait. This is achieved by either of two new modelling approaches which employ coupled oscillators and the biomechanics of human locomotion as the underlying concepts. These models give a plausible method for data reduction by providing estimates of the inclination of the thigh and of the leg, from the image data. Both approaches derive a phase-weighted Fourier description gait signature by automated non-invasive means. One approach is completely automated whereas the other requires speci5cation of a single parameter to distinguish between walking and running. Results show that both gaits are potential biometrics, with running being more potent. By its basis in evidence gathering, this new technique can tolerate noise and low resolution. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c8a0276919005f36a587d7d209063e2f",
"text": "Praveen Prakash1, Kuttapa Nishanth2, Nikul Jasani1, Aneesh Katyal1, US Krishna Nayak3 1Post Graduate Student, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India, 2Professor, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India, 3Dean Academics, Head of Department, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India",
"title": ""
},
{
"docid": "22841c2d63cf94f76643244475b547cb",
"text": "Problems of reference, identity, and meaning are becoming increasingly endemic on the Web. We focus first on the convergence between Web architecture and classical problems in philosophy, leading to the advent of “philosophical engineering.” We survey how the Semantic Web initiative in particular provoked an “identity crisis” for the Web due to its use of URIs for both “things” and web pages and the W3C’s proposed solution. The problem of reference is inspected in relation to both the direct object theory of reference of Russell and the causal theory of reference of Kripke, and the proposed standards of new URN spaces and Published Subjects. Then we progress onto the problem of meaning in light of the Fregean slogan of the priority of truth over reference and the notion of logical interpretation. The popular notion of “social meaning” and the practice of tagging as a possible solution is analyzed in light of the ideas of Lewis on convention. Finally, we conclude that a full notion of meaning, identity, and reference may be possible, but that it is an open problem whether or not practical implementations and standards can be created. 1. PHILOSOPHICAL ENGINEERING While the Web epitomizes the beginning of a new digital era, it has also caused an untimely return of philosophical issues in identify, reference, and meaning. These questions are thought of as a “black hole” that has long puzzled philosophers and logicians. Up until now, there has been little incentive outside academic philosophy to solve these issues in any practical manner. Could there be any connection between the fast-paced world of the Web and philosophers who dwell upon unsolvable questions? In a surprising move, the next stage in the development of the Web seems to be signalling a return to the very same questions of identity, reference, and meaning that have troubled philosophers for so long. While the hypertext Web has skirted around these questions, attempts at increasing the scope of the Web can not: “The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation” [41]. Meaning is a thorny word: do we define meaning as “machine-readable” or “has a relation to a formal model?” Or do we define meaning as “easily understood by humans,” or “somehow connected to the world in a roCopyright is held by the author/owner(s). WWW2006, May 22–26, 2006, Edinburgh, UK. . bust manner?” Further progress in creating both satisfying and pragmatic solutions to these problems in the context of the Web is possible since currently many of these questions are left underspecified by current Web standards. While many in philosophy seem to be willing to hedge their bets in various ideological camps, on the Web there is a powerful urge to co-operate. There is a distinct difference between the classical posing of these questions in philosophy and these questions in the context of the Web, since the Web is a human artifact. The inventor of the Web, Tim Berners-Lee, summarized this position: “We are not analyzing a world, we are building it. We are not experimental philosophers, we are philosophical engineers” [2]. 2. THE IDENTITY CRISIS OF URIS The first step in the creation of the Semantic Web was to extend the use of a URI (Uniform Resource Identifier) to identify not just web pages, but anything. 
This was historically always part of Berners-Lee’s vision, but only recently came to light with Semantic Web standardization efforts and has caused disagreement from some of the other original Web architects like Larry Masinter, co-author of the URI standard [4]. In contrast to past practice that generally used URIs for web pages, URIs could be given to things traditionally thought of as “not on the Web” such as concepts and people. The guiding example is that instead of just visiting Tim Berners-Lee’s web page to retrieve a representation of Tim Berners-Lee via http, you could use the Semantic Web to make statements about Tim himself, such as where he works or the color of his hair. Early proposals made a chasm across URIs, dividing them into URLs and URNs. URIs for web pages (documents) are URLs (Uniform Resource Locators) that could use a scheme such as http to perform a “variety of operations” on a resource[5]. In contrast, URNs (Uniform Resource Names) purposely avoided such access mechanisms in order to create “persistent, location-independent, resource identifiers” [29]. URNs were not widely adopted, perhaps due to their centralized nature that required explicitly registering them with IANA. In response, URLs were just called “URIs” and used not only for web pages, but for things not on the web. Separate URN standards such as Masinter’s tdb URN space have been declared, but have not been widely adopted [27]. Instead, people use http in general to identify both web pages and things. There is one sensible solution to get a separate URI for the thing if one has a URI that currently serves a representation of a thing, but one wishes to make statements about the thing itself. First, one can use URI redirection for a URI about a thing and then resolve the redirection to a more informative web page [28]. The quickest way to do this is to append a “hash” (fragment identifier) onto the end of a URI, and so the redirection happens automatically. This is arguably an abuse of fragment identifiers which were originally meant for client-side processing. Yet according to the W3C, using a fragment identifier technically also identifies a separate and distinct “secondary resource” [23]. Regardless, this ability to talk about anything with URIs leads to a few practical questions: Can I make a statement on the Semantic Web about Tim Berners-Lee by making a statement about his home-page? If he is using a separate URI for himself, should he deliver a representation of himself? However, in all these cases there is the lurking threat of ambiguity: There is no principled way to distinguish a URI for a web page versus a URI for a thing “not on the Web.” This was dubbed the Identity Crisis, and has spawned endless discussions ever since [9]. For web pages (or “documents”) it’s pretty easy to tell what a URI identifies: The URI identifies the stream of bits that one gets when one accesses the URI with whatever operations are allowed by the scheme of the URI. Therefore, unlike names in natural language, URIs often imply the potential possession of whatever representations the URI gives one access to, and in a Wittgenstein-like move David Booth declares that there is only a myth of identity [7]. What a URI identifies or means is a question of use. “The problem of URI identity is the problem of locating appropriate descriptive information about the associated resource – descriptive information that enables you to make use of that URI in a particular application” [7]. 
In general, this should be the minimal amount of information one can get away with to make sure that the URI is used properly in a particular application. However, if the meaning of a URI is its use, then this use can easily change between applications, and nothing about the meaning (use) of a URI should be assumed to be invariant across applications. While this is a utilitarian and attractive reading, it prevents the one thing the Web is supposed to allow: a universal information space. Since the elementary building blocks of this space, URIs, are meaningless without the concrete context of an application, and each applications may have orthogonal contexts, there is no way an application can share its use of URIs in general with other applications. 3. URIS IDENTIFY ONE THING Tim Berners-Lee has stated that URIs “identify one thing” [3]. This thing is a resource. The most current IETF RFC for URIs states that it does not “limit the scope of what might be a resource” but that a resource “is used in a general sense for whatever might be identified by a URI” such as “human beings, corporations, and bound books in a library” and even “abstract concepts” [4]. An earlier RFC tried to ground out the concept of a resource as “the conceptual mapping to an entity or set of entities, not necessarily the entity which corresponds to that mapping at any particular instance in time” in order to deal with changes in particular representations over time, as exemplified by the web sites of newspapers like http://www.guardian.co.uk [4]. If a URI identifies a conceptual mapping, in whose head in that conceptual mapping? The user of the URI, the owner of the URI, or some common understanding? While Tim Berners-Lee argues that the URI owner dictates what thing the URI identifies, Larry Masinter believes the user should be the final authority, and calls this “the power of readers over writers.” Yet this psychological middle-man is dropped in the latest RFC that states that URIs provide “a simple and extensible means for identifying a resource,” and a resource is “whatever might be identified by a URI” [4]. In this manner, a resource and its URI become disturbing close to a tautology. Given a URI, what does it identify? A resource. What’s a resource? It’s what the URI identifies. According to Berners-Lee, in a given RDF statement, a URI should identify one resource. Furthermore, this URI identifies one thing in a “global context” [2]. This position taken to an extreme leads to problems: given two textually distinct URIs, is it possible they could identify the same thing? How can we judge if they identify the same thing? The classic definition of identity is whether or not two objects are in fact, on some given level, the same. The classic formulation is Leibniz’s Law, which states if two objects have all their properties in common, then they are identical and so only one object [25]. With web pages, one can compare the representations byte-by-byte even if the URIs are different, and so we can say two mirrors of a web pages have the sa",
"title": ""
},
{
"docid": "0626c39604a1dde16a5d27de1c4cef24",
"text": "Two dimensional (2D) materials with a monolayer of atoms represent an ultimate control of material dimension in the vertical direction. Molybdenum sulfide (MoS2) monolayers, with a direct bandgap of 1.8 eV, offer an unprecedented prospect of miniaturizing semiconductor science and technology down to a truly atomic scale. Recent studies have indeed demonstrated the promise of 2D MoS2 in fields including field effect transistors, low power switches, optoelectronics, and spintronics. However, device development with 2D MoS2 has been delayed by the lack of capabilities to produce large-area, uniform, and high-quality MoS2 monolayers. Here we present a self-limiting approach that can grow high quality monolayer and few-layer MoS2 films over an area of centimeters with unprecedented uniformity and controllability. This approach is compatible with the standard fabrication process in semiconductor industry. It paves the way for the development of practical devices with 2D MoS2 and opens up new avenues for fundamental research.",
"title": ""
},
{
"docid": "3ee39231fc2fbf3b6295b1b105a33c05",
"text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publiclytraded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply wellknown regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.",
"title": ""
},
{
"docid": "9932770cfc46cee41fb0b37a72771410",
"text": "This study explores the extent to which a bilingual advantage can be observed for three tasks in an established population of fully fluent bilinguals from childhood through adulthood. Welsh-English simultaneous and early sequential bilinguals, as well as English monolinguals, aged 3 years through older adults, were tested on three sets of cognitive and executive function tasks. Bilinguals were Welsh-dominant, balanced, or English-dominant, with only Welsh, Welsh and English, or only English at home. Card sorting, Simon, and a metalinguistic judgment task (650, 557, and 354 participants, respectively) reveal little support for a bilingual advantage, either in relation to control or globally. Primarily there is no difference in performance across groups, but there is occasionally better performance by monolinguals or persons dominant in the language being tested, and in one case-in one condition and in one age group-lower performance by the monolinguals. The lack of evidence for a bilingual advantage in these simultaneous and early sequential bilinguals suggests the need for much closer scrutiny of what type of bilingual might demonstrate the reported effects, under what conditions, and why.",
"title": ""
},
{
"docid": "58640b446a3c03ab8296302498e859a5",
"text": "With Islands of Music we present a system which facilitates exploration of music libraries without requiring manual genre classification. Given pieces of music in raw audio format we estimate their perceived sound similarities based on psychoacoustic models. Subsequently, the pieces are organized on a 2-dimensional map so that similar pieces are located close to each other. A visualization using a metaphor of geographic maps provides an intuitive interface where islands resemble genres or styles of music. We demonstrate the approach using a collection of 359 pieces of music.",
"title": ""
},
{
"docid": "b5d18b82e084042a6f31cb036ee83af5",
"text": "In this paper, signal and power integrity of complete High Definition Multimedia Interface (HDMI) channel with IBIS-AMI model is presented. Gigahertz serialization and deserialization (SERDES) has become a leading inter-chip and inter-board data transmission technique in high-end computing devices. The IBIS-AMI model is used for circuit simulation of high-speed serial interfaces. A 3D frequency-domain simulator (FEM) was used to estimate the channel loss for data bus and HDMI connector. Compliance testing is performed for HDMI channels to ensure channel parameters are meeting HDMI specifications.",
"title": ""
},
{
"docid": "28c03f6fb14ed3b7d023d0983cb1e12b",
"text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"title": ""
},
{
"docid": "697ae7ff6a0ace541ea0832347ba044f",
"text": "The repair of wounds is one of the most complex biological processes that occur during human life. After an injury, multiple biological pathways immediately become activated and are synchronized to respond. In human adults, the wound repair process commonly leads to a non-functioning mass of fibrotic tissue known as a scar. By contrast, early in gestation, injured fetal tissues can be completely recreated, without fibrosis, in a process resembling regeneration. Some organisms, however, retain the ability to regenerate tissue throughout adult life. Knowledge gained from studying such organisms might help to unlock latent regenerative pathways in humans, which would change medical practice as much as the introduction of antibiotics did in the twentieth century.",
"title": ""
}
] |
scidocsrr
|
a415503ceb55bfe061cf67864f66da36
|
Insight and reduction of MapReduce stragglers in heterogeneous environment
|
[
{
"docid": "8222f36e2aa06eac76085fb120c8edab",
"text": "Small jobs, that are typically run for interactive data analyses in datacenters, continue to be plagued by disproportionately long-running tasks called stragglers. In the production clusters at Facebook and Microsoft Bing, even after applying state-of-the-art straggler mitigation techniques, these latency sensitive jobs have stragglers that are on average 8 times slower than the median task in that job. Such stragglers increase the average job duration by 47%. This is because current mitigation techniques all involve an element of waiting and speculation. We instead propose full cloning of small jobs, avoiding waiting and speculation altogether. Cloning of small jobs only marginally increases utilization because workloads show that while the majority of jobs are small, they only consume a small fraction of the resources. The main challenge of cloning is, however, that extra clones can cause contention for intermediate data. We use a technique, delay assignment, which efficiently avoids such contention. Evaluation of our system, Dolly, using production workloads shows that the small jobs speedup by 34% to 46% after state-of-the-art mitigation techniques have been applied, using just 5% extra resources for cloning.",
"title": ""
}
] |
[
{
"docid": "b9538c45fc55caff8b423f6ecc1fe416",
"text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.",
"title": ""
},
{
"docid": "5183794d8bef2d8f2ee4048d75a2bd3c",
"text": "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.",
"title": ""
},
{
"docid": "d612ca22b9895c0e85f2b64327a1b22c",
"text": "Physical inactivity has been associated with increasing prevalence and mortality of cardiovascular and other diseases. The purpose of this study is to identify if there is an association between, self–efficacy, mental health, and physical inactivity among university students. The study comprises of 202 males and 692 females age group 18-25 years drawn from seven faculties selected using a table of random numbers. Questionnaires were used for the data collection. The findings revealed that the prevalence of physical inactivity among the respondents was 41.4%. Using a univariate analysis, the study showed that there was an association between gender (female), low family income, low self-efficacy, respondents with mental health probable cases and physical inactivity (p<0.05).Using a multivariate analysis, physical inactivity was higher among females(OR = 3.72, 95% CI = 2.399-5.788), low family income (OR = 4.51, 95% CI = 3.266 – 6.241), respondents with mental health probable cases (OR = 1.58, 95% CI = 1.1362.206) and low self-efficacy for pysical activity(OR = 1.86, 95% CI = 1.350 2.578).Conclusively there is no significant decrease in physical inactivity among university students when compared with previous studies in this population, it is therefore recommended that counselling on mental health, physical activity awareness among new university students should be encouraged. Keyword:Exercise,Mental Health, Self-Efficacy,Physical Inactivity, University students",
"title": ""
},
{
"docid": "0188eb4ef8a87b6cee8657018360fa69",
"text": "This paper presents a pattern division multiple access (PDMA) concept for cellular future radio access (FRA) towards the 2020s information society. Different from the current LTE radio access scheme (until Release 11), PDMA is a novel non-orthogonal multiple access technology based on the total optimization of multiple user communication system. It considers joint design from both transmitter and receiver. At the receiver, multiple users are detected by successive interference cancellation (SIC) detection method. Numerical results show that the PDMA system based on SIC improve the average sum rate of users over the orthogonal system with affordable complexity.",
"title": ""
},
{
"docid": "7b989f3da78e75d9616826644d210b79",
"text": "BACKGROUND\nUse of cannabis is often an under-reported activity in our society. Despite legal restriction, cannabis is often used to relieve chronic and neuropathic pain, and it carries psychotropic and physical adverse effects with a propensity for addiction. This article aims to update the current knowledge and evidence of using cannabis and its derivatives with a view to the sociolegal context and perspectives for future research.\n\n\nMETHODS\nCannabis use can be traced back to ancient cultures and still continues in our present society despite legal curtailment. The active ingredient, Δ9-tetrahydrocannabinol, accounts for both the physical and psychotropic effects of cannabis. Though clinical trials demonstrate benefits in alleviating chronic and neuropathic pain, there is also significant potential physical and psychotropic side-effects of cannabis. Recent laboratory data highlight synergistic interactions between cannabinoid and opioid receptors, with potential reduction of drug-seeking behavior and opiate sparing effects. Legal rulings also have changed in certain American states, which may lead to wider use of cannabis among eligible persons.\n\n\nCONCLUSIONS\nFamily physicians need to be cognizant of such changing landscapes with a practical knowledge on the pros and cons of medical marijuana, the legal implications of its use, and possible developments in the future.",
"title": ""
},
{
"docid": "969c83b4880879f1137284f531c9f94a",
"text": "The extant literature on cross-national differences in approaches to corporate social responsibility (CSR) has mostly focused on developed countries. Instead, we offer two interrelated studies into corporate codes of conduct issued by developing country multinational enterprises (DMNEs). First, we analyse code adoption rates and code content through a mixed methods design. Second, we use multilevel analyses to examine country-level drivers of",
"title": ""
},
{
"docid": "ad004dd47449b977cd30f2454c5af77a",
"text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.",
"title": ""
},
{
"docid": "037df2435ae0f995a40d5cce429af5cb",
"text": "Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract important information to help advance healthcare, make our cities smarter, and innovate in smart home technology. Deep convolutional neural networks, which are at the heart of many emerging Internet-of-Things (IoT) applications, achieve remarkable performance in audio and visual recognition tasks, at the expense of high computational complexity in convolutional layers, limiting their deployability. In this paper, we present an easy-to-implement acceleration scheme, named ADaPT, which can be applied to already available pre-trained networks. Our proposed technique exploits redundancy present in the convolutional layers to reduce computation and storage requirements. Additionally, we also decompose each convolution layer into two consecutive one-dimensional stages to make full use of the approximate model. This technique can easily be applied to existing low power processors, GPUs or new accelerators. We evaluated this technique using four diverse and widely used benchmarks, on hardware ranging from embedded CPUs to server GPUs. Our experiments show an average 3-5x speed-up in all deep models and a maximum 8-9x speed-up on many individual convolutional layers. We demonstrate that unlike iterative pruning based methodology, our approximation technique is mathematically well grounded, robust, does not require any time-consuming retraining, and still achieves speed-ups solely from convolutional layers with no loss in baseline accuracy.",
"title": ""
},
{
"docid": "3df95e4b2b1bb3dc80785b25c289da92",
"text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.",
"title": ""
},
{
"docid": "d4766ccd502b9c35ee83631fadc69aaf",
"text": "The approach proposed by Śliwerski, Zimmermann, and Zeller (SZZ) for identifying bug-introducing changes is at the foundation of several research areas within the software engineering discipline. Despite the foundational role of SZZ, little effort has been made to evaluate its results. Such an evaluation is a challenging task because the ground truth is not readily available. By acknowledging such challenges, we propose a framework to evaluate the results of alternative SZZ implementations. The framework evaluates the following criteria: (1) the earliest bug appearance, (2) the future impact of changes, and (3) the realism of bug introduction. We use the proposed framework to evaluate five SZZ implementations using data from ten open source projects. We find that previously proposed improvements to SZZ tend to inflate the number of incorrectly identified bug-introducing changes. We also find that a single bug-introducing change may be blamed for introducing hundreds of future bugs. Furthermore, we find that SZZ implementations report that at least 46 percent of the bugs are caused by bug-introducing changes that are years apart from one another. Such results suggest that current SZZ implementations still lack mechanisms to accurately identify bug-introducing changes. Our proposed framework provides a systematic mean for evaluating the data that is generated by a given SZZ implementation.",
"title": ""
},
{
"docid": "5c8ab947856945b32d4d3e0edc89a9e0",
"text": "While MOOCs offer educational data on a new scale, many educators find great potential of the big data including detailed activity records of every learner. A learner's behavior such as if a learner will drop out from the course can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests such as surrogate exam-taker is a challenging problem. In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups including certification earning, video watching, and course sampling. The GC then predicts a certification earning learner may or may not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and it is possible to find a surrogate exam-taker.",
"title": ""
},
{
"docid": "5ca75490c015685a1fc670b2ee5103ff",
"text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.",
"title": ""
},
{
"docid": "ac3d9b8a93cb18449b76b2f2ef818d76",
"text": "Slotless brushless dc motors find more and more applications due to their high performance and their low production cost. This paper focuses on the windings inserted in the air gap of these motors and, in particular, to an original production technique that consists in printing them on a flexible printed circuit board. It theoretically shows that this technique, when coupled with an optimization of the winding shape, can improve the power density of about 23% compared with basic skewed and rhombic windings made of round wire. It also presents a first prototype of a winding realized using this technique and an experimental characterization aimed at identifying the importance and the origin of the differences between theory and practice.",
"title": ""
},
{
"docid": "dffc11786d4a0d9247e22445f48d8fca",
"text": "Tuberization in potato (Solanum tuberosum L.) is a complex biological phenomenon which is affected by several environmental cues, genetic factors and plant nutrition. Understanding the regulation of tuber induction is essential to devise strategies to improve tuber yield and quality. It is well established that short-day photoperiods promote tuberization, whereas long days and high-temperatures inhibit or delay tuberization. Worldwide research on this complex biological process has yielded information on the important bio-molecules (proteins, RNAs, plant growth regulators) associated with the tuberization process in potato. Key proteins involved in the regulation of tuberization include StSP6A, POTH1, StBEL5, StPHYB, StCONSTANS, Sucrose transporter StSUT4, StSP5G, etc. Biomolecules that become transported from \"source to sink\" have also been suggested to be important signaling candidates regulating the tuberization process in potatos. Four molecules, namely StSP6A protein, StBEL5 RNA, miR172 and GAs, have been found to be the main candidates acting as mobile signals for tuberization. These biomolecules can be manipulated (overexpressed/inhibited) for improving the tuberization in commercial varieties/cultivars of potato. In this review, information about the genes/proteins and their mechanism of action associated with the tuberization process is discussed.",
"title": ""
},
{
"docid": "926734e0a379f678740d07c1042a5339",
"text": "The increasing pervasiveness of digital technologies, also refered to as \"Internet of Things\" (IoT), offers a wealth of business model opportunities, which often involve an ecosystem of partners. In this context, companies are required to look at business models beyond a firm-centric lens and respond to changed dynamics. However, extant literature has not yet provided actionable approaches for business models for IoT-driven environments. Our research therefore addresses the need for a business model framework that captures the specifics of IoT-driven ecosystems. Applying an iterative design science research approach, the present paper describes (a) the methodology, (b) the requirements, (c) the design and (d) the evaluation of a business model framework that enables researchers and practitioners to visualize, analyze and design business models in the IoT context in a structured and actionable way. The identified dimensions in the framework include the value network of collaborating partners (who); sources of value creation (where); benefits from collaboration (why). Evidence from action research and multiple case studies indicates that the framework is able to depict business models in IoT.",
"title": ""
},
{
"docid": "35c08abd57d2700164373c688c24b2a6",
"text": "Image enhancement is a common pre-processing step before the extraction of biometric features from a fingerprint sample. This can be essential especially for images of low image quality. An ideal fingerprint image enhancement should intend to improve the end-to-end biometric performance, i.e. the performance achieved on biometric features extracted from enhanced fingerprint samples. We use a model from Deep Learning for the task of image enhancement. This work's main contribution is a dedicated cost function which is optimized during training The cost function takes into account the biometric feature extraction. Our approach intends to improve the accuracy and reliability of the biometric feature extraction process: No feature should be missed and all features should be extracted as precise as possible. By doing so, the loss function forced the image enhancement to learn how to improve the suitability of a fingerprint sample for a biometric comparison process. The effectivity of the cost function was demonstrated for two different biometric feature extraction algorithms.",
"title": ""
},
{
"docid": "a870b0b347d15d8e8c788ede7ff5fa4a",
"text": "On the twentieth anniversary of the original publication [10], following ten years of intense activity in the research literature, hardware support for transactional memory (TM) has finally become a commercial reality, with HTM-enabled chips currently or soon-to-be available from many hardware vendors. In this paper we describe architectural support for TM added to a future version of the Power ISA#8482;. Two imperatives drove the development: the desire to complement our weakly-consistent memory model with a more friendly interface to simplify the development and porting of multithreaded applications, and the need for robustness beyond that of some early implementations. In the process of commercializing the feature, we had to resolve some previously unexplored interactions between TM and existing features of the ISA, for example translation shootdown, interrupt handling, atomic read-modify-write primitives, and our weakly consistent memory model. We describe these interactions, the overall architecture, and discuss the motivation and rationale for our choices of architectural semantics, beyond what is typically found in reference manuals.",
"title": ""
},
{
"docid": "9c008dc2f3da4453317ce92666184da0",
"text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator QEMU. Our implementation can simulate various cases of cache configuration and obtain every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer which is close to native execution of ARM Cortex-M3(125 MIPS at 100 MHz). Compared to the widely used cache simulation tool, Valgrind, our simulator is three time faster.",
"title": ""
},
{
"docid": "5d9106a06f606cefb3b24fb14c72d41a",
"text": "Most existing relation extraction models make predictions for each entity pair locally and individually, while ignoring implicit global clues available in the knowledge base, sometimes leading to conflicts among local predictions from different entity pairs. In this paper, we propose a joint inference framework that utilizes these global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints which can capture the implicit type and cardinality requirements of a relation. Experimental results on three datasets, in both English and Chinese, show that our framework outperforms the state-of-theart relation extraction models when such clues are applicable to the datasets. And, we find that the clues learnt automatically from existing knowledge bases perform comparably to those refined by human.",
"title": ""
}
] |
scidocsrr
|
c4ff1bae68e8e1d9cde109c65924ede6
|
Enhancing CNN Incremental Learning Capability with an Expanded Network
|
[
{
"docid": "7d112c344167add5749ab54de184e224",
"text": "Since Krizhevsky won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 competition with the brilliant deep convolutional neural networks (D-CNNs), researchers have designed lots of D-CNNs. However, almost all the existing very deep convolutional neural networks are trained on the giant ImageNet datasets. Small datasets like CIFAR-10 has rarely taken advantage of the power of depth since deep models are easy to overfit. In this paper, we proposed a modified VGG-16 network and used this model to fit CIFAR-10. By adding stronger regularizer and using Batch Normalization, we achieved 8.45% error rate on CIFAR-10 without severe overfitting. Our results show that the very deep CNN can be used to fit small datasets with simple and proper modifications and don't need to re-design specific small networks. We believe that if a model is strong enough to fit a large dataset, it can also fit a small one.",
"title": ""
},
{
"docid": "9b1874fb7e440ad806aa1da03f9feceb",
"text": "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added domain, typically as many as the original network. We propose a method called Deep Adaptation Modules (DAM) that constrains newly learned filters to be linear combinations of existing ones. DAMs precisely preserve performance on the original domain, require a fraction (typically 13%, dependent on network architecture) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.",
"title": ""
},
{
"docid": "5092b52243788c4f4e0c53e7556ed9de",
"text": "This work attempts to address two fundamental questions about the structure of the convolutional neural networks (CNN): 1) why a nonlinear activation function is essential at the filter output of all intermediate layers? 2) what is the advantage of the two-layer cascade system over the one-layer system? A mathematical model called the “REctified-COrrelations on a Sphere” (RECOS) is proposed to answer these two questions. After the CNN training process, the converged filter weights define a set of anchor vectors in the RECOS model. Anchor vectors represent the frequently occurring patterns (or the spectral components). The necessity of rectification is explained using the RECOS model. Then, the behavior of a two-layer RECOS system is analyzed and compared with its one-layer counterpart. The LeNet-5 and the MNIST dataset are used to illustrate discussion points. Finally, the RECOS model is generalized to a multilayer system with the AlexNet as an example.",
"title": ""
}
] |
[
{
"docid": "6b1f584a5665bda68a5215de5aed2fc7",
"text": "Most semi-supervised learning models propagate the labels over the Laplacian graph, where the graph should be built beforehand. However, the computational cost of constructing the Laplacian graph matrix is very high. On the other hand, when we do classification, data points lying around the decision boundary (boundary points) are noisy for learning the correct classifier and deteriorate the classification performance. To address these two challenges, in this paper, we propose an adaptive semi-supervised learning model. Different from previous semi-supervised learning approaches, our new model needn't construct the graph Laplacian matrix. Thus, our method avoids the huge computational cost required by previous methods, and achieves a computational complexity linear to the number of data points. Therefore, our method is scalable to large-scale data. Moreover, the proposed model adaptively suppresses the weights of boundary points, such that our new model is robust to the boundary points. An efficient algorithm is derived to alternatively optimize the model parameter and class probability distribution of the unlabeled data, such that the induction of classifier and the transduction of labels are adaptively unified into one framework. Extensive experimental results on six real-world data sets show that the proposed semi-supervised learning model outperforms other related methods in most cases.",
"title": ""
},
{
"docid": "7a9b9633243d84978d9e975744642e18",
"text": "Our aim is to provide a pixel-level object instance labeling of a monocular image. We build on recent work [27] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [27] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [13]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [27].",
"title": ""
},
{
"docid": "e913d5a0d898df3db28b97b27757b889",
"text": "Speech-language pathologists tend to rely on the noninstrumental swallowing evaluation in making recommendations about a patient’s diet and management plan. The present study was designed to examine the sensitivity and specificity of the accuracy of using the chin-down posture during the clinical/bedside swallowing assessment. In 15 patients with acute stroke and clinically suspected oropharyngeal dysphagia, the correlation between clinical and videofluoroscopic findings was examined. Results identified that there is a difference in outcome prediction using the chin-down posture during the clinical/bedside assessment of swallowing compared to assessment by videofluoroscopy. Results are discussed relative to statistical and clinical perspectives, including site of lesion and factors to be considered in the design of an overall treatment plan for a patient with disordered swallowing.",
"title": ""
},
{
"docid": "523a1bc4ac20bd0bbabd85a8eea66c5b",
"text": "Crime is a major social problem in the United States, threatening public safety and disrupting the economy. Understanding patterns in criminal activity allows for the prediction of future high-risk crime “hot spots” and enables police precincts to more effectively allocate officers to prevent or respond to incidents. With the ever-increasing ability of states and organizations to collect and store detailed data tracking crime occurrence, a significant amount of data with spatial and temporal information has been collected. How to use the benefit of massive spatial-temporal information to precisely predict the regional crime rates becomes necessary. The recurrent neural network model has been widely proven effective for detecting the temporal patterns in a time series. In this study, we propose the Spatio-Temporal neural network (STNN) to precisely forecast crime hot spots with embedding spatial information. We evaluate the model using call-for-service data provided by the Portland, Oregon Police Bureau (PPB) for a 5-year period from March 2012 through the end of December 2016. We show that our STNN model outperforms a number of classical machine learning approaches and some alternative neural network architectures.",
"title": ""
},
{
"docid": "aae743c3254352ff973dcb8fbff55299",
"text": "Software Defined Radar is the latest trend in radar development. To handle enhanced radar signal processing techniques, advanced radars need to be able of generating various types of waveforms, such as frequency modulated or phase coded, and to perform multiple functions. The adoption of a Software Defined Radio system makes easier all these abilities. In this work, the implementation of a Software Defined Radar system for target tracking using the Universal Software Radio Peripheral platform is discussed. For the first time, an experimental characterization in terms of radar application is performed on the latest Universal Software Radio Peripheral NI2920, demonstrating a strongly improved target resolution with respect to the first generation platform.",
"title": ""
},
{
"docid": "60ad412d0d6557d2a06e9914bbf3c680",
"text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d3e2efde80890e469684a41287833eb6",
"text": "Recent work has suggested reducing electricity generation cost by cutting the peak to average ratio (PAR) without reducing the total amount of the loads. However, most of these proposals rely on consumer's willingness to act. In this paper, we propose an approach to cut PAR explicitly from the supply side. The resulting cut loads are then distributed among consumers by the means of a multiunit auction which is done by an intelligent agent on behalf of the consumer. This approach is also in line with the future vision of the smart grid to have the demand side matched with the supply side. Experiments suggest that our approach reduces overall system cost and gives benefit to both consumers and the energy provider.",
"title": ""
},
{
"docid": "4a8448ab4c1c9e0a1df5e2d1c1d20417",
"text": "We present an empirical framework for testing game strategies in The Settlers of Catan, a complex win-lose game that lacks any analytic solution. This framework provides the means to change different components of an autonomous agent's strategy, and to test them in suitably controlled ways via performance metrics in game simulations and via comparisons of the agent's behaviours with those exhibited in a corpus of humans playing the game. We provide changes to the game strategy that not only improve the agent's strength, but corpus analysis shows that they also bring the agent closer to a model of human players.",
"title": ""
},
{
"docid": "065eb4ca2fbef1a8d0d4029b178a0c98",
"text": "Melanoma is the deadliest type of skin cancer with highest mortality rate. However, the annihilation in its early stage implies a high survival rate therefore, it demands early diagnosis. The accustomed diagnosis methods are costly and cumbersome due to the involvement of experienced experts as well as the requirements for the highly equipped environment. The recent advancements in computerized solutions for this diagnosis are highly promising with improved accuracy and efficiency. In this article, a method for the identification and classification of the lesion based on probabilistic distribution and best features selection is proposed. The probabilistic distribution such as normal distribution and uniform distribution are implemented for segmentation of lesion in the dermoscopic images. Then multi-level features are extracted and parallel strategy is performed for fusion. A novel entropy-based method with the combination of Bhattacharyya distance and variance are calculated for the selection of best features. Only selected features are classified using multi-class support vector machine, which is selected as a base classifier. The proposed method is validated on three publicly available datasets such as PH2, ISIC (i.e. ISIC MSK-2 and ISIC UDA), and Combined (ISBI 2016 and ISBI 2017), including multi-resolution RGB images and achieved accuracy of 97.5%, 97.75%, and 93.2%, respectively. The base classifier performs significantly better on proposed features fusion and selection method as compared to other methods in terms of sensitivity, specificity, and accuracy. Furthermore, the presented method achieved satisfactory segmentation results on selected datasets.",
"title": ""
},
{
"docid": "31dfedb06716502fcf33871248fd7e9e",
"text": "Multi-sensor precipitation datasets including two products from the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and estimates from Climate Prediction Center Morphing Technique (CMORPH) product were quantitatively evaluated to study the monsoon variability over Pakistan. Several statistical and graphical techniques are applied to illustrate the nonconformity of the three satellite products from the gauge observations. During the monsoon season (JAS), the three satellite precipitation products captures the intense precipitation well, all showing high correlation for high rain rates (>30 mm/day). The spatial and temporal satellite rainfall error variability shows a significant geo-topography dependent distribution, as all the three products overestimate over mountain ranges in the north and coastal region in the south parts of Indus basin. The TMPA-RT product tends to overestimate light rain rates (approximately 100%) and the bias is low for high rain rates (about ±20%). In general, daily comparisons from 2005 to 2010 show the best agreement between the TMPA-V7 research product and gauge observations with correlation coefficient values ranging from moderate (0.4) to high (0.8) over the spatial domain of Pakistan. The seasonal variation of rainfall frequency has large biases (100–140%) over high latitudes (36N) with complex terrain for daily, monsoon, and pre-monsoon comparisons. Relatively low uncertainties and errors (Bias ±25% and MAE 1–10 mm) were associated with the TMPA-RT product during the monsoon-dominated region (32–35N), thus demonstrating their potential use for developing an operational hydrological application of the satellite-based near real-time products in Pakistan for flood monitoring. 2014 COSPAR. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "db9ff7ade6b863707bf595e2b866745b",
"text": "Pneumatic devices require tight tolerances to keep them leak-free. Specialized companies offer various off-the-shelf devices, while these work well for many applications, there are also situations where custom design and production of pneumatic parts are desired. Cost efficiency, design flexibility, rapid prototyping, and MRI compatibility requirements are reasons why we investigated a method to design and produce different pneumatic devices using a laser cutter from acrylic, acetal, and rubber-like materials. The properties of the developed valves, pneumatic cylinders, and stepper motors were investigated. At 4-bar working pressure, the 4/3-way valves are capable of 5-Hz switching frequency and provide at most 22-L/min airflow. The pneumatic cylinder delivers 48 N of force, the acrylic stepper motor 30 N. The maximum switching frequency over 6-m long transmission lines is 4.5 Hz, using 2-mm tubing. A MRI-compatible robotic biopsy system driven by the pneumatic stepper motors is also demonstrated. We have shown that it is possible to construct pneumatic devices using laser-cutting techniques. This way, plastic MRI-compatible cylinders, stepper motors, and valves can be developed. Provided that a laser-cutting machine is available, the described pneumatic devices can be fabricated within hours at relatively low cost, making it suitable for rapid prototyping applications.",
"title": ""
},
{
"docid": "d9366c0456eedecd396a9aa1dbc31e35",
"text": "A connectionist model is presented, the TraceLink model, that implements an autonomous \"off-line\" consolidation process. The model consists of three subsystems: (1) a trace system (neocortex), (2) a link system (hippocampus and adjacent regions), and (3) a modulatory system (basal forebrain and other areas). The model is able to account for many of the characteristics of anterograde and retrograde amnesia, including Ribot gradients, transient global amnesia, patterns of shrinkage of retrograde amnesia, and correlations between anterograde and retrograde amnesia or the absence thereof (e.g., in isolated retrograde amnesia). In addition, it produces normal forgetting curves and can exhibit permastore. It also offers an explanation for the advantages of learning under high arousal for long-term retention.",
"title": ""
},
{
"docid": "15ba6a0a5ce45fbecf33bff5d2194250",
"text": "Recently, pathological diagnosis plays a crucial role in many areas of medicine, and some researchers have proposed many models and algorithms for improving classification accuracy by extracting excellent feature or modifying the classifier. They have also achieved excellent results on pathological diagnosis using tongue images. However, pixel values can't express intuitive features of tongue images and different classifiers for training samples have different adaptability. Accordingly, this paper presents a robust approach to infer the pathological characteristics by observing tongue images. Our proposed method makes full use of the local information and similarity of tongue images. Firstly, tongue images in RGB color space are converted to Lab. Then, we compute tongue statistics information. In the calculation process, Lab space dictionary is created at first, through it, we compute statistic value for each dictionary value. After that, a method based on Doublets is taken for feature optimization. At last, we use XGBOOST classifier to predict the categories of tongue images. We achieve classification accuracy of 95.39% using statistics feature and the improved classifier, which is helpful for TCM (Traditional Chinese Medicine) diagnosis.",
"title": ""
},
{
"docid": "b5b8ae3b7b307810e1fe39630bc96937",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "70a970138428aeb06c139abb893a56a9",
"text": "Two sequentially rotated, four stage, wideband circularly polarized high gain microstrip patch array antennas at Ku-band are investigated and compared by incorporating both unequal and equal power division based feeding networks. Four stages of sequential rotation is used to create 16×16 patch array which provides wider common bandwidth between the impedance matching (S11 < −10dB), 3dB axial ratio and 3dB gain of 12.3% for the equal power divider based feed array and 13.2% for the unequal power divider based feed array in addition to high polarization purity. The high peak gain of 28.5dBic is obtained for the unequal power division feed based array antennas compared to 26.8dBic peak gain in the case of the equal power division based feed array antennas. The additional comparison between two feed networks based arrays reveals that the unequal power divider based array antennas provide better array characteristics than the equal power divider based feed array antennas.",
"title": ""
},
{
"docid": "ae43fc77cfe3e88f00a519744407eed7",
"text": "In this work we use the recent advances in representation learning to propose a neural architecture for the problem of natural language inference. Our approach is aligned to mimic how a human does the natural language inference process given two statements. The model uses variants of Long Short Term Memory (LSTM), attention mechanism and composable neural networks, to carry out the task. Each part of our model can be mapped to a clear functionality humans do for carrying out the overall task of natural language inference. The model is end-to-end differentiable enabling training by stochastic gradient descent. On Stanford Natural Language Inference(SNLI) dataset, the proposed model achieves better accuracy numbers than all published models in literature.",
"title": ""
},
{
"docid": "5de07054546347e150aeabe675234966",
"text": "Smart farming is seen to be the future of agriculture as it produces higher quality of crops by making farms more intelligent in sensing its controlling parameters. Analyzing massive amount of data can be done by accessing and connecting various devices with the help of Internet of Things (IoT). However, it is not enough to have an Internet support and self-updating readings from the sensors but also to have a self-sustainable agricultural production with the use of analytics for the data to be useful. This study developed a smart hydroponics system that is used in automating the growing process of the crops using exact inference in Bayesian Network (BN). Sensors and actuators are installed in order to monitor and control the physical events such as light intensity, pH, electrical conductivity, water temperature, and relative humidity. The sensor values gathered were used to build the Bayesian Network in order to infer the optimum value for each parameter. A web interface is developed wherein the user can monitor and control the farm remotely via the Internet. Results have shown that the fluctuations in terms of the sensor values were minimized in the automatic control using BN as compared to the manual control. The yielded crop on the automatic control was 66.67% higher than the manual control which implies that the use of exact inference in BN aids in producing high-quality crops. In the future, the system can use higher data analytics and longer data gathering to improve the accuracy of inference.",
"title": ""
},
{
"docid": "c2ac1c1f08e7e4ccba14ea203acba661",
"text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.",
"title": ""
},
{
"docid": "4768b338044e38949f50c5856bc1a07c",
"text": "Radio-frequency identification (RFID) technology provides an effective tool for managing traceability along food supply chains. This is because it allows automatic digital registration of data, and therefore reduces errors and enables the availability of information on demand. A complete traceability system can be developed in the wine production sector by joining this technology with the use of wireless sensor networks for monitoring at the vineyards. A proposal of such a merged solution for a winery in Spain has been designed, deployed in an actual environment, and evaluated. It was shown that the system could provide a competitive advantage to the company by improving visibility of the processes performed and the associated control over product quality. Much emphasis has been placed on minimizing the impact of the new system in the current activities.",
"title": ""
}
] |
scidocsrr
|
302b33b7f7abe43e01027e16fe586812
|
Is the Implicit Association Test a Valid and Valuable Measure of Implicit Consumer Social Cognition ?
|
[
{
"docid": "eed70d4d8bfbfa76382bfc32dd12c3db",
"text": "Three studies tested basic assumptions derived from a theoretical model based on the dissociation of automatic and controlled processes involved in prejudice. Study 1 supported the model's assumption that highand low-prejudice persons are equally knowledgeable of the cultural stereotype. The model suggests that the stereotype is automatically activated in the presence of a member (or some symbolic equivalent) of the stereotyped group and that low-prejudice responses require controlled inhibition of the automatically activated stereotype. Study 2, which examined the effects of automatic stereotype activation on the evaluation of ambiguous stereotype-relevant behaviors performed by a race-unspecified person, suggested that when subjects' ability to consciously monitor stereotype activation is precluded, both highand low-prejudice subjects produce stereotype-congruent evaluations of ambiguous behaviors. Study 3 examined highand low-prejudice subjects' responses in a consciously directed thought-listing task. Consistent with the model, only low-prejudice subjects inhibited the automatically activated stereotype-congruent thoughts and replaced them with thoughts reflecting equality and negations of the stereotype. The relation between stereotypes and prejudice and implications for prejudice reduction are discussed.",
"title": ""
},
{
"docid": "6d5bb9f895461b3bd7ee82041c3db6aa",
"text": "Respondents at an Internet site completed over 600,000 tasks between October 1998 and April 2000 measuring attitudes toward and stereotypes of social groups. Their responses demonstrated, on average, implicit preference for White over Black and young over old and stereotypic associations linking male terms with science and career and female terms with liberal arts and family. The main purpose was to provide a demonstration site at which respondents could experience their implicit attitudes and stereotypes toward social groups. Nevertheless, the data collected are rich in information regarding the operation of attitudes and stereotypes, most notably the strength of implicit attitudes, the association and dissociation between implicit and explicit attitudes, and the effects of group membership on attitudes and stereotypes.",
"title": ""
}
] |
[
{
"docid": "5d91c93728632586a63634c941420c64",
"text": "A new method for analyzing analog single-event transient (ASET) data has been developed. The approach allows for quantitative error calculations, given device failure thresholds. The method is described and employed in the analysis of an OP-27 op-amp.",
"title": ""
},
{
"docid": "b59f429192a680c1dc07580d21f9e374",
"text": "Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes, (2) stole existing door lock codes, (3) disabled vacation mode of the home, and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.",
"title": ""
},
{
"docid": "d6adda476cc8bd69c37bd2d00f0dace4",
"text": "The conceptualization of a distinct construct known as statistics anxiety has led to the development of numerous rating scales, including the Statistical Anxiety Rating Scale (STARS), designed to assess levels of statistics anxiety. In the current study, the STARS was administered to a sample of 423 undergraduate and graduate students from a midsized, western United States university. The Rasch measurement rating scale model was used to analyze scores from the STARS. Misfitting items were removed from the analysis. In general, items from the six subscales represented a broad range of abilities, with the major exception being a lack of items at the lower extremes of the subscales. Additionally, a differential item functioning (DIF) analysis was performed across sex and student classification. Several items displayed DIF, which indicates subgroups may ascribe different meanings to those items. The paper concludes with several recommendations for researchers considering using the STARS.",
"title": ""
},
{
"docid": "0899cfa62ccd036450c079eb3403902a",
"text": "Manual editing of a metro map is essential because many aesthetic and readability demands in map generation cannot be achieved by using a fully automatic method. In addition, a metro map should be updated when new metro lines are developed in a city. Considering that manually designing a metro map is time-consuming and requires expert skills, we present an interactive editing system that considers human knowledge and adjusts the layout to make it consistent with user expectations. In other words, only a few stations are controlled and the remaining stations are relocated by our system. Our system supports both curvilinear and octilinear layouts when creating metro maps. It solves an optimization problem, in which even spaces, route straightness, and maximum included angles at junctions are considered to obtain a curvilinear result. The system then rotates each edge to extend either vertically, horizontally, or diagonally while approximating the station positions provided by users to generate an octilinear layout. Experimental results, quantitative and qualitative evaluations, and user studies show that our editing system is easy to use and allows even non-professionals to design a metro map.",
"title": ""
},
{
"docid": "95d6189ba97f15c7cc33028f13f8789f",
"text": "This paper presents a new Bayesian nonnegative matrix factorization (NMF) for monaural source separation. Using this approach, the reconstruction error based on NMF is represented by a Poisson distribution, and the NMF parameters, consisting of the basis and weight matrices, are characterized by the exponential priors. A variational Bayesian inference procedure is developed to learn variational parameters and model parameters. The randomness in separation process is faithfully represented so that the system robustness to model variations in heterogeneous environments could be achieved. Importantly, the exponential prior parameters are used to impose sparseness in basis representation. The variational lower bound of log marginal likelihood is adopted as the objective to control model complexity. The dependencies of variational objective on model parameters are fully characterized in the derived closed-form solution. A clustering algorithm is performed to find the groups of bases for unsupervised source separation. The experiments on speech/music separation and singing voice separation show that the proposed Bayesian NMF (BNMF) with adaptive basis representation outperforms the NMF with fixed number of bases and the other BNMFs in terms of signal-to-distortion ratio and the global normalized source to distortion ratio.",
"title": ""
},
{
"docid": "7e2ba771e25a2e6716ce59522ace2835",
"text": "Online debate sites are a large source of informal and opinion-sharing dialogue on current socio-political issues. Inferring users’ stance (PRO or CON) towards discussion topics in domains such as politics or news is an important problem, and is of utility to researchers, government organizations, and companies. Predicting users’ stance supports identification of social and political groups, building of better recommender systems, and personalization of users’ information preferences to their ideological beliefs. In this paper, we develop a novel collective classification approach to stance classification, which makes use of both structural and linguistic features, and which collectively labels the posts’ stance across a network of the users’ posts. We identify both linguistic features of the posts and features that capture the underlying relationships between posts and users. We use probabilistic soft logic (PSL) (Bach et al., 2013) to model post stance by leveraging both these local linguistic features as well as the observed network structure of the posts to reason over the dataset. We evaluate our approach on 4FORUMS (Walker et al., 2012b), a collection of discussions from an online debate site on issues ranging from gun control to gay marriage. We show that our collective classification model is able to easily incorporate rich, relational information and outperforms a local model which uses only linguistic information.",
"title": ""
},
{
"docid": "6dfb62138ad7e0c23826a2c6b7c2507e",
"text": "End-to-end speech recognition systems have been successfully designed for English. Taking into account the distinctive characteristics between Chinese Mandarin and English, it is worthy to do some additional work to transfer these approaches to Chinese. In this paper, we attempt to build a Chinese speech recognition system using end-to-end learning method. The system is based on a combination of deep Long Short-Term Memory Projected (LSTMP) network architecture and the Connectionist Temporal Classification objective function (CTC). The Chinese characters (the number is about 6,000) are used as the output labels directly. To integrate language model information during decoding, the CTC Beam Search method is adopted and optimized to make it more effective and more efficient. We present the first-pass decoding results which are obtained by decoding from scratch using CTC-trained network and language model. Although these results are not as good as the performance of DNN-HMMs hybrid system, they indicate that it is feasible to choose Chinese characters as the output alphabet in the end-toend speech recognition system.",
"title": ""
},
{
"docid": "5bee78694f3428d3882e27000921f501",
"text": "We introduce a new approach to perform background subtraction in moving camera scenarios. Unlike previous treatments of the problem, we do not restrict the camera motion or the scene geometry. The proposed approach relies on Bayesian selection of the transformation that best describes the geometric relation between consecutive frames. Based on the selected transformation, we propagate a set of learned background and foreground appearance models using a single or a series of homography transforms. The propagated models are subjected to MAP-MRF optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization process provides the final background/foreground labels. Extensive experimental evaluation with challenging videos shows that the proposed method outperforms the baseline and state-of-the-art methods in most cases.",
"title": ""
},
{
"docid": "764840c288985e0257413c94205d2bf2",
"text": "Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.",
"title": ""
},
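A minimal sketch of the two-term objective described above: a distillation term that preserves the old classes' behaviour plus a cross-entropy term over all classes, written in PyTorch. The temperature, loss weighting, and class counts are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch of a class-incremental loss: distillation on the old classes'
# logits plus standard cross-entropy over all (old + new) classes.
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, labels, n_old, T=2.0, lam=1.0):
    """new_logits: (B, n_old + n_new) from the model being trained.
    old_logits: (B, n_old) recorded from the frozen previous model."""
    # Distillation term: match the softened old-class probabilities.
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    distill = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
    # Cross-entropy on the ground-truth labels (old or new classes).
    ce = F.cross_entropy(new_logits, labels)
    return ce + lam * distill

logits_new = torch.randn(4, 15, requires_grad=True)   # 10 old + 5 new classes
logits_old = torch.randn(4, 10)                        # from the saved old model
labels = torch.tensor([0, 3, 12, 14])
incremental_loss(logits_new, logits_old, labels, n_old=10).backward()
```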
{
"docid": "2c2daf28c81e7f12113a391835961981",
"text": "We address the problem of generating images across two drastically different views, namely ground (street) and aerial (overhead) views. Image synthesis by itself is a very challenging computer vision task and is even more so when generation is conditioned on an image in another view. Due the difference in viewpoints, there is small overlapping field of view and little common content between these two views. Here, we try to preserve the pixel information between the views so that the generated image is a realistic representation of cross view input image. For this, we propose to use homography as a guide to map the images between the views based on the common field of view to preserve the details in the input image. We then use generative adversarial networks to inpaint the missing regions in the transformed image and add realism to it. Our exhaustive evaluation and model comparison demonstrate that utilizing geometry constraints adds fine details to the generated images and can be a better approach for cross view image synthesis than purely pixel based synthesis methods.",
"title": ""
},
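A small OpenCV sketch of the geometric step described above: estimate a homography from matched points and warp the ground-view image toward the aerial view, leaving holes for a GAN-based inpainting stage to fill. The image and point correspondences are synthetic placeholders, not data from the paper.

```python
# Illustrative homography-guided warping between two views (not the paper's pipeline).
import cv2
import numpy as np

ground = np.full((480, 640, 3), 127, np.uint8)   # stand-in for a street-view photo
h, w = 512, 512                                   # target (aerial) resolution

# Matched point pairs between the two views (made-up coordinates).
src_pts = np.float32([[10, 400], [600, 420], [320, 250], [50, 300]])
dst_pts = np.float32([[100, 100], [400, 110], [256, 300], [120, 260]])

H, _ = cv2.findHomography(src_pts, dst_pts)
warped = cv2.warpPerspective(ground, H, (w, h))   # ground image aligned to aerial frame

# Pixels that received no source content stay black; these are the regions a
# generative inpainting network would be asked to fill in.
missing = (warped.sum(axis=2) == 0)
print("fraction of target pixels to inpaint:", round(float(missing.mean()), 3))
```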
{
"docid": "26d20cd47dfd174ecb8606b460c1c040",
"text": "In this article, we use an automated bottom-up approach to identify semantic categories in an entire corpus. We conduct an experiment using a word vector model to represent the meaning of words. The word vectors are then clustered, giving a bottom-up representation of semantic categories. Our main finding is that the likelihood of changes in a word’s meaning correlates with its position within its cluster.",
"title": ""
},
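A hedged sketch of the bottom-up pipeline described above: cluster word vectors and use each word's distance to its cluster centroid as a proxy for its position within the cluster. Random vectors stand in for embeddings trained on a real corpus, and the word list is purely illustrative.

```python
# Toy bottom-up semantic clustering of word vectors with scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
words = ["gay", "broadcast", "mouse", "cell", "awful", "nice"]
vectors = rng.normal(size=(len(words), 50))       # placeholder word embeddings

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
centroids = kmeans.cluster_centers_[kmeans.labels_]
distances = np.linalg.norm(vectors - centroids, axis=1)

# Per the paper's finding, words far from their centroid would be the ones
# whose meanings are more likely to change.
for word, d in sorted(zip(words, distances), key=lambda p: -p[1]):
    print(f"{word}: {d:.2f}")
```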
{
"docid": "5cb44c68cecb0618be14cd52182dc96e",
"text": "Recognition of objects using Deep Neural Networks is an active area of research and many breakthroughs have been made in the last few years. The paper attempts to indicate how far this field has progressed. The paper briefly describes the history of research in Neural Networks and describe several of the recent advances in this field. The performances of recently developed Neural Network Algorithm over benchmark datasets have been tabulated. Finally, some the applications of this field have been provided.",
"title": ""
},
{
"docid": "ff76b52f7859aaffa58307018edb8323",
"text": "Malevolent Trojan circuits inserted by layout modifications in an IC at untrustworthy fabrication facilities are difficult to detect by traditional post-manufacturing testing. In this paper, we develop a novel low-overhead design methodology that facilitates the detection of inserted Trojan hardware in an IC through logic testing. As a byproduct, it also increases the security of the design by design obfuscation. Application of the proposed design methodology to an 8-bit RISC processor and a JPEG encoder resulted in improvement in Trojan detection probability significantly. It also obfuscated the design with verification mismatch for 90% of the verification points, while incurring moderate area, power and delay overheads.",
"title": ""
},
{
"docid": "486d31b962600141ba75dfde718f5b3d",
"text": "The design, fabrication, and measurement of a coax to double-ridged waveguide launcher and horn antenna is presented. The novel launcher design employs two symmetric field probes across the ridge gap to minimize spreading inductance in the transition, and achieves better than 15 dB return loss over a 10:1 bandwidth. The aperture-matched horn uses a half-cosine transition into a linear taper for the outer waveguide dimensions and ridge width, and a power-law scaled gap to realize monotonically varying cutoff frequencies, thus avoiding the appearance of trapped mode resonances. It achieves a nearly constant beamwidth in both E- and H-planes for an overall directivity of about 16.5 dB from 10-100 GHz.",
"title": ""
},
{
"docid": "970a76190e980afe51928dcaa6d594c8",
"text": "Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online.",
"title": ""
},
{
"docid": "ad1582fb37440ef7182af4925427f5ca",
"text": "The advent of new information technology has radically changed the end-user computing environment over the past decade. To enhance their management decision-making capability, many organizations have made significant investments in business intelligence (BI) systems. The realization of business benefits from BI investments depends on supporting effective use of BI systems and satisfying their end user requirements. Even though a lot of attention has been paid to the decision-making benefits of BI systems in practice, there is still a limited amount of empirical research that explores the nature of enduser satisfaction with BI systems. End-user satisfaction and system usage have been recognized by many researchers as critical determinants of the success of information systems (IS). As an increasing number of companies have adopted BI systems, there is a need to understand their impact on an individual end-user’s performance. In recent years, researchers have considered assessing individual performance effects from IS use as a key area of concern. Therefore, this study aims to empirically test a framework identifying the relationships between end-user computing satisfaction (EUCS), system usage, and individual performance. Data gathered from 330 end users of BI systems in the Taiwanese electronics industry were used to test the relationships proposed in the framework using the structural equation modeling approach. The results provide strong support for our model. Our results indicate that higher levels of EUCS can lead to increased BI system usage and improved individual performance, and that higher levels of BI system usage will lead to higher levels of individual performance. In addition, this study’s findings, consistent with DeLone and McLean’s IS success model, confirm that there exists a significant positive relationship between EUCS and system usage. Theoretical and practical implications of the findings are discussed. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f3c5a1cef29f5fa834433ce859b15694",
"text": "This paper describes the design, construction, and testing of a 750-V 100-kW 20-kHz bidirectional isolated dual-active-bridge dc-dc converter using four 1.2-kV 400-A SiC-MOSFET/SBD dual modules. The maximum conversion efficiency from the dc-input to the dc-output terminals is accurately measured to be as high as 98.7% at 42-kW operation. The overall power loss at the rated-power (100 kW) operation, excluding the gate-drive and control circuit losses, is divided into the conduction and switching losses produced by the SiC modules, the iron and copper losses due to magnetic devices, and the other unknown loss. The power-loss breakdown concludes that the sum of the conduction and switching losses is about 60% of the overall power loss and that the conduction loss is nearly equal to the switching loss at the 100-kW and 20-kHz operation.",
"title": ""
},
{
"docid": "c1389acb62cca5cb3cfdec34bd647835",
"text": "A Chinese resume information extraction system (CRIES) based on semi-structured text is designed and implemented to obtain formatted information by extracting text content of every field from resumes in different formats and update information automatically based on the web. Firstly, ideas to classify resumes, some constraints obtained by analyzing resume features and overall extraction strategy is introduced. Then two extraction algorithms for parsing resumes in different text formats are given. Consequently, the system was implemented by java programming. Finally, use the system to resolve the resume samples, and the statistical analysis and system optimization analysis are carried out according to the accuracy rate and recall rate of the extracted results.",
"title": ""
},
{
"docid": "53b43126d066f5e91d7514f5da754ef3",
"text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. The low computational cost makes this method ideal for path planning in dynamic environments.",
"title": ""
},
{
"docid": "72bc688726c5fc26b2dd7e63d3b28ac0",
"text": "In Convolutional Neural Network (CNN)-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state of-the-art performance on both detection and pose estimation on commonly used benchmarks.",
"title": ""
}
] |
scidocsrr
|
240a10a3748a237c47aff9013c7e3949
|
Examining Spectral Reflectance Saturation in Landsat Imagery and Corresponding Solutions to Improve Forest Aboveground Biomass Estimation
|
[
{
"docid": "59b10765f9125e9c38858af901a39cc7",
"text": "--------__------------------------------------__---------------",
"title": ""
},
{
"docid": "9a4ca8c02ffb45013115124011e7417e",
"text": "Now, we come to offer you the right catalogues of book to open. multisensor data fusion a review of the state of the art is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
}
] |
[
{
"docid": "edeefde21bbe1ace9a34a0ebe7bc6864",
"text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"title": ""
},
{
"docid": "74287743f75368623da74e716ae8e263",
"text": "Organizations increasingly use social media and especially social networking sites (SNS) to support their marketing agenda, enhance collaboration, and develop new capabilities. However, the success of SNS initiatives is largely dependent on sustainable user participation. In this study, we argue that the continuance intentions of users may be gendersensitive. To theorize and investigate gender differences in the determinants of continuance intentions, this study draws on the expectation-confirmation model, the uses and gratification theory, as well as the self-construal theory and its extensions. Our survey of 488 users shows that while both men and women are motivated by the ability to selfenhance, there are some gender differences. Specifically, while women are mainly driven by relational uses, such as maintaining close ties and getting access to social information on close and distant networks, men base their continuance intentions on their ability to gain information of a general nature. Our research makes several contributions to the discourse in strategic information systems literature concerning the use of social media by individuals and organizations. Theoretically, it expands the understanding of the phenomenon of continuance intentions and specifically the role of the gender differences in its determinants. On a practical level, it delivers insights for SNS providers and marketers into how satisfaction and continuance intentions of male and female SNS users can be differentially promoted. Furthermore, as organizations increasingly rely on corporate social networks to foster collaboration and innovation, our insights deliver initial recommendations on how organizational social media initiatives can be supported with regard to gender-based differences. 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b6ec4629a39097178895762a35e0c7eb",
"text": "In this paper, we dedicate to the topic of aspect ranking, which aims to automatically identify important product aspects from online consumer reviews. The important aspects are identified according to two observations: (a) the important aspects of a product are usually commented by a large number of consumers; and (b) consumers’ opinions on the important aspects greatly influence their overall opinions on the product. In particular, given consumer reviews of a product, we first identify the product aspects by a shallow dependency parser and determine consumers’ opinions on these aspects via a sentiment classifier. We then develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach. We further apply the aspect ranking results to the application of documentlevel sentiment classification, and improve the performance significantly.",
"title": ""
},
{
"docid": "5b43cce2027f1e5afbf7985ca2d4af1a",
"text": "With Internet delivery of video content surging to an unprecedented level, video has become one of the primary sources for online advertising. In this paper, we present VideoSense as a novel contextual in-video advertising system, which automatically associates the relevant video ads and seamlessly inserts the ads at the appropriate positions within each individual video. Unlike most video sites which treat video advertising as general text advertising by displaying video ads at the beginning or the end of a video or around a video, VideoSense aims to embed more contextually relevant ads at less intrusive positions within the video stream. Specifically, given a Web page containing an online video, VideoSense is able to extract the surrounding text related to this video, detect a set of candidate ad insertion positions based on video content discontinuity and attractiveness, select a list of relevant candidate ads according to multimodal relevance. To support contextual advertising, we formulate this task as a nonlinear 0-1 integer programming problem by maximizing contextual relevance while minimizing content intrusiveness at the same time. The experiments proved the effectiveness of VideoSense for online video service.",
"title": ""
},
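A toy, brute-force illustration of the trade-off formulated above as a nonlinear 0-1 integer program: choose (ad, insertion point) pairs that maximize total relevance while penalizing intrusiveness, with each ad and each insertion point used at most once. All scores and the penalty weight are invented; a real system would use a proper solver rather than enumeration.

```python
# Brute-force sketch of ad-to-insertion-point selection with an intrusiveness penalty.
from itertools import product

relevance = {("ad1", "p1"): 0.9, ("ad1", "p2"): 0.4,
             ("ad2", "p1"): 0.5, ("ad2", "p2"): 0.7}
intrusiveness = {"p1": 0.2, "p2": 0.6}
lam = 0.5                       # weight of the intrusiveness penalty

best_value, best_plan = float("-inf"), None
pairs = list(relevance)
# Each decision variable x[ad, point] is 0 or 1; enumerate all assignments.
for bits in product([0, 1], repeat=len(pairs)):
    chosen = [p for p, b in zip(pairs, bits) if b]
    ads = [a for a, _ in chosen]
    points = [p for _, p in chosen]
    if len(set(ads)) < len(ads) or len(set(points)) < len(points):
        continue                # an ad or an insertion point used more than once
    value = sum(relevance[c] for c in chosen) - lam * sum(intrusiveness[p] for p in points)
    if value > best_value:
        best_value, best_plan = value, chosen

print(best_plan, round(best_value, 2))   # [('ad1', 'p1'), ('ad2', 'p2')] 1.2
```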
{
"docid": "b5de3747c17f6913539b62377f9af5c4",
"text": "In this paper, we propose a novel embedding model, named ConvKB, for knowledge base completion. Our model ConvKB advances state-of-the-art models by employing a convolutional neural network, so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases. In ConvKB, each triple (head entity, relation, tail entity) is represented as a 3-column matrix where each column vector represents a triple element. This 3-column matrix is then fed to a convolution layer where multiple filters are operated on the matrix to generate different feature maps. These feature maps are then concatenated into a single feature vector representing the input triple. The feature vector is multiplied with a weight vector via a dot product to return a score. This score is then used to predict whether the triple is valid or not. Experiments show that ConvKB obtains better link prediction and triple classification results than previous state-of-the-art models on benchmark datasets WN18RR, FB15k-237, WN11 and FB13. We further apply our ConvKB to a search personalization problem which aims to tailor the search results to each specific user based on the user’s personal interests and preferences. In particular, we model the potential relationship between the submitted query, the user and the search result (i.e., document) as a triple (query, user, document) on which the ConvKB is able to work. Experimental results on query logs from a commercial web search engine show that ConvKB achieves better performances than the standard ranker as well as strong search personalization baselines.",
"title": ""
},
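A hedged PyTorch re-implementation sketch of the ConvKB scoring function described above: each triple forms a k x 3 matrix of embeddings, 1 x 3 filters slide over its rows, and the concatenated feature maps are scored with a weight vector. Embedding dimension, filter count, and entity/relation counts are placeholders.

```python
# Sketch of ConvKB-style triple scoring in PyTorch.
import torch
import torch.nn as nn

class ConvKBScore(nn.Module):
    def __init__(self, n_entities, n_relations, dim=100, n_filters=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(1, 3))  # slides over rows
        self.w = nn.Linear(dim * n_filters, 1, bias=False)       # scoring weight vector

    def forward(self, heads, rels, tails):
        # Stack head, relation and tail embeddings as the 3 columns of a (dim x 3) matrix.
        m = torch.stack([self.ent(heads), self.rel(rels), self.ent(tails)], dim=2)
        feats = torch.relu(self.conv(m.unsqueeze(1)))             # (B, filters, dim, 1)
        return self.w(feats.flatten(start_dim=1)).squeeze(-1)     # higher = more plausible

model = ConvKBScore(n_entities=1000, n_relations=20)
score = model(torch.tensor([1, 2]), torch.tensor([0, 5]), torch.tensor([3, 4]))
print(score.shape)   # torch.Size([2])
```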
{
"docid": "32a2bfb7a26631f435f9cb5d825d8da2",
"text": "An important aspect for the task of grammatical error correction (GEC) that has not yet been adequately explored is adaptation based on the native language (L1) of writers, despite the marked influences of L1 on second language (L2) writing. In this paper, we adapt a neural network joint model (NNJM) using L1-specific learner text and integrate it into a statistical machine translation (SMT) based GEC system. Specifically, we train an NNJM on general learner text (not L1-specific) and subsequently train on L1-specific data using a Kullback-Leibler divergence regularized objective function in order to preserve generalization of the model. We incorporate this adapted NNJM as a feature in an SMT-based English GEC system and show that adaptation achieves significant F0.5 score gains on English texts written by L1 Chinese, Russian, and Spanish writers.",
"title": ""
},
{
"docid": "15ada8f138d89c52737cfb99d73219f0",
"text": "A dual-band circularly polarized stacked annular-ring patch antenna is presented in this letter. This antenna operates at both the GPS L1 frequency of 1575 MHz and L2 frequency of 1227 MHz, whose frequency ratio is about 1.28. The proposed antenna is formed by two concentric annular-ring patches that are placed on opposite sides of a substrate. Wide axial-ratio bandwidths (larger than 2%), determined by 3-dB axial ratio, are achieved at both bands. The measured gains at 1227 and 1575 MHz are about 6 and 7 dBi, respectively, with the loss of substrate taken into consideration. Both simulated and measured results are presented. The method of varying frequency ratio is also discussed.",
"title": ""
},
{
"docid": "8e794530be184686a49e5ced6ac6521d",
"text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.",
"title": ""
},
{
"docid": "eb6823bcc7e01dbdc9a21388bde0ce4f",
"text": "This paper extends previous research on two approaches to human-centred automation: (1) intermediate levels of automation (LOAs) for maintaining operator involvement in complex systems control and facilitating situation awareness; and (2) adaptive automation (AA) for managing operator workload through dynamic control allocations between the human and machine over time. Some empirical research has been conducted to examine LOA and AA independently, with the objective of detailing a theory of human-centred automation. Unfortunately, no previous work has studied the interaction of these two approaches, nor has any research attempted to systematically determine which LOAs should be used in adaptive systems and how certain types of dynamic function allocations should be scheduled over time. The present research briefly reviews the theory of humancentred automation and LOA and AA approaches. Building on this background, an initial study was presented that attempts to address the conjuncture of these two approaches to human-centred automation. An experiment was conducted in which a dual-task scenario was used to assess the performance, SA and workload effects of low, intermediate and high LOAs, which were dynamically allocated (as part of an AA strategy) during manual system control for various cycle times comprising 20, 40 and 60% of task time. The LOA and automation allocation cycle time (AACT) combinations were compared to completely manual control and fully automated control of a dynamic control task performed in conjunction with an embedded secondary monitoring task. Results revealed LOA to be the driving factor in determining primary task performance and SA. Low-level automation produced superior performance and intermediate LOAs facilitated higher SA, but this was not associated with improved performance or reduced workload. The AACT was the driving factor in perceptions of primary task workload and secondary task performance. When a greater percentage of primary task time was automated, operator perceptual resources were freed-up and monitoring performance on the secondary task improved. Longer automation cycle times than have previously been studied may have benefits for overall human–machine system performance. The combined effect of LOA and AA on all measures did not appear to be ‘additive’ in nature. That is, the LOA producing the best performance (low level automation) did not do so at the AACT, which produced superior performance (maximum cycle time). In general, the results are supportive of intermediate LOAs and AA as approaches to human-centred automation, but each appears to provide different benefits to human–machine system performance. This work provides additional information for a developing theory of human-centred automation. Theor. Issues in Ergon. Sci., 2003, 1–40, preview article",
"title": ""
},
{
"docid": "2fe1ed0f57e073372e4145121e87d7c6",
"text": "Information visualization (InfoVis), the study of transforming data, information, and knowledge into interactive visual representations, is very important to users because it provides mental models of information. The boom in big data analytics has triggered broad use of InfoVis in a variety of domains, ranging from finance to sports to politics. In this paper, we present a comprehensive survey and key insights into this fast-rising area. The research on InfoVis is organized into a taxonomy that contains four main categories, namely empirical methodologies, user interactions, visualization frameworks, and applications, which are each described in terms of their major goals, fundamental principles, recent trends, and state-of-the-art approaches. At the conclusion of this survey, we identify existing technical challenges and propose directions for future research.",
"title": ""
},
{
"docid": "a28c252f9f3e96869c72e6e41146b5bc",
"text": "Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they also simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract the features from EEG signals, among these methods are time frequency distributions (TFD), fast fourier transform (FFT), eigenvector methods (EM), wavelet transform (WT), and auto regressive method (ARM), and so on. In general, the analysis of EEG signal has been the subject of several studies, because of its ability to yield an objective mode of recording brain stimulation which is widely used in brain-computer interface researches with application in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, shall be discussing some conventional methods of EEG feature extraction methods, comparing their performances for specific task, and finally, recommending the most suitable method for feature extraction based on performance.",
"title": ""
},
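As a concrete example of one of the conventional routes mentioned above (spectral/FFT features), the sketch below estimates per-channel EEG band power with Welch's method. The sampling rate, band edges, and the synthetic signal are assumptions made only for illustration.

```python
# Band-power feature extraction from (synthetic) EEG with Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                                    # Hz, assumed sampling rate
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

rng = np.random.default_rng(1)
eeg = rng.normal(size=(4, fs * 10))         # 4 channels, 10 s of fake EEG

def band_powers(signal, fs, bands):
    f, pxx = welch(signal, fs=fs, nperseg=fs * 2)
    df = f[1] - f[0]
    # Approximate the integral of the PSD over each band.
    return {name: float(pxx[(f >= lo) & (f < hi)].sum() * df)
            for name, (lo, hi) in bands.items()}

features = [band_powers(ch, fs, bands) for ch in eeg]
print(features[0])    # e.g. {'delta': ..., 'theta': ..., 'alpha': ..., 'beta': ...}
```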
{
"docid": "040329beb0f4688ced46d87a51dac169",
"text": "We present a characterization methodology for fast direct measurement of the charge accumulated on Floating Gate (FG) transistors of Flash EEPROM cells. Using a Scanning Electron Microscope (SEM) in Passive Voltage Contrast (PVC) mode we were able to distinguish between '0' and '1' bit values stored in each memory cell. Moreover, it was possible to characterize the remaining charge on the FG; thus making this technique valuable for Failure Analysis applications for data retention measurements in Flash EEPROM. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Only a relatively simple backside sample preparation is necessary for accessing the FG of memory transistors. The technique presented was successfully implemented on a 0.35 μm technology node microcontroller and a 0.21 μm smart card integrated circuit. We also show the ease of such technique to cover all cells of a memory (using intrinsic features of SEM) and to automate memory cells characterization using standard image processing technique.",
"title": ""
},
{
"docid": "067e24b29aae26865c858d6b8e60b135",
"text": "In this paper, we present an optimization path of stress memorization technique (SMT) for 45nm node and below using a nitride capping layer. We demonstrate that the understanding of coupling between nitride properties, dopant activation and poly-silicon gate mechanical stress allows enhancing nMOS performance by 7% without pMOS degradation. In contrast to previously reported works on SMT (Chen et al., 2004) - (Singh et al., 2005), a low-cost process compatible with consumer electronics requirements has been successfully developed",
"title": ""
},
{
"docid": "715fda02bad1633be9097cc0a0e68c8d",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "26b13a3c03014fc910ed973c264e4c9d",
"text": "Deep convolutional neural networks (CNNs) have shown great potential for numerous real-world machine learning applications, but performing inference in large CNNs in real-time remains a challenge. We have previously demonstrated that traditional CNNs can be converted into deep spiking neural networks (SNNs), which exhibit similar accuracy while reducing both latency and computational load as a consequence of their data-driven, event-based style of computing. Here we provide a novel theory that explains why this conversion is successful, and derive from it several new tools to convert a larger and more powerful class of deep networks into SNNs. We identify the main sources of approximation errors in previous conversion methods, and propose simple mechanisms to fix these issues. Furthermore, we develop spiking implementations of common CNN operations such as max-pooling, softmax, and batch-normalization, which allow almost loss-less conversion of arbitrary CNN architectures into the spiking domain. Empirical evaluation of different network architectures on the MNIST and CIFAR10 benchmarks leads to the best SNN results reported to date.",
"title": ""
},
{
"docid": "82119f5c85eaa2c4a76b2c7b0561375c",
"text": "A system is described that integrates vision and tactile sensing in a robotics environment to perform object recognition tasks. It uses multiple sensor systems (active touch and passive stereo vision) to compute three dimensional primitives that can be matched against a model data base of complex curved surface objects containing holes and cavities. The low level sensing elements provide local surface and feature matches which arc constrained by relational criteria embedded in the models. Once a model has been invoked, a verification procedure establishes confidence measures for a correct recognition. The three dimen* sional nature of the sensed data makes the matching process more robust as does the system's ability to sense visually occluded areas with touch. The model is hierarchic in nature and allows matching at different levels to provide support or inhibition for recognition. 1. INTRODUCTION Robotic systems are being designed and built to perform complex tasks such as object recognition, grasping, parts manipulation, inspection and measurement. In the case of object recognition, many systems have been designed that have tried to exploit a single sensing modality [1,2,3,4,5,6]. Single sensor systems are necessarily limited in their power. The approach described here to overcome the inherent limitations of a single sensing modality is to integrate multiple sensing modalities (passive stereo vision and active tactile sensing) for object recognition. The advantages of multiple sensory systems in a task like this are many. Multiple sensor systems supply redundant and complementary kinds of data that can be integrated to create a more coherent understanding of a scene. The inclusion of multiple sensing systems is becoming more apparent as research continues in distributed systems and parallel approaches to problem solving. The redundancy and support for a hypothesis that comes from more than one sensing subsystem is important in establishing confidence measures during a recognition process, just as the disagreement between two sensors will inhibit a hypothesis and point to possible sensing or reasoning error. The complementary nature of these sensors allows more powerful matching primitives to be used. The primitives that are the outcome of sensing with these complementary sensors are throe dimensional in nature, providing stronger invariants and a more natural way to recognize objects which are also three dimensional in nature [7].",
"title": ""
},
{
"docid": "ed22fe0d13d4450005abe653f41df2c0",
"text": "Polycystic ovary syndrome (PCOS) is a complex endocrine disorder affecting 5-10 % of women of reproductive age. It generally manifests with oligo/anovulatory cycles, hirsutism and polycystic ovaries, together with a considerable prevalence of insulin resistance. Although the aetiology of the syndrome is not completely understood yet, PCOS is considered a multifactorial disorder with various genetic, endocrine and environmental abnormalities. Moreover, PCOS patients have a higher risk of metabolic and cardiovascular diseases and their related morbidity, if compared to the general population.",
"title": ""
},
{
"docid": "d07281bab772b6ba613f9526d418661e",
"text": "GSM (Global Services of Mobile Communications) 1800 licenses were granted in the beginning of the 2000’s in Turkey. Especially in the installation phase of the wireless telecom services, fraud usage can be an important source of revenue loss. Fraud can be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is the name of the activities to identify unauthorized usage and prevent losses for the mobile network operators’. Mobile phone user’s intentions may be predicted by the call detail records (CDRs) by using data mining (DM) techniques. This study compares various data mining techniques to obtain the best practical solution for the telecom fraud detection and offers the Adaptive Neuro Fuzzy Inference (ANFIS) method as a means to efficient fraud detection. In the test run, shown that ANFIS has provided sensitivity of 97% and specificity of 99%, where it classified 98.33% of the instances correctly.",
"title": ""
},
{
"docid": "0e2a2a32923d8e9fa5779e80e6090dba",
"text": "The most powerful and common approach to countering the threats to network / information security is encryption [1]. Even though it is very powerful, the cryptanalysts are very intelligent and they were working day and night to break the ciphers. To make a stronger cipher it is recommended that to use: More stronger and complicated encryption algorithms, Keys with more number of bits (Longer keys), larger block size as input to process, use authentication and confidentiality and secure transmission of keys. It is for sure that if we follow all the mentioned principles we can make a very stronger cipher. With this we have the following problems: It is a time consuming process for both encryption and decryption, It is difficult for the crypt analyzer to analyze the problem. Also suffers with the problems in the existing system. The main objective of this paper is to solve all these problems and to bring the revolution in the Network security with a new substitution technique [3] is ‘color substitution technique’ and named as a “Play color cipher”.",
"title": ""
}
] |
scidocsrr
|
3e7af8497d080d88c7873de1ca8a4027
|
Natural Language Semantics Using Probabilistic Logic
|
[
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "70fd543752f17237386b3f8e99954230",
"text": "Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency",
"title": ""
}
] |
[
{
"docid": "11f2adab1fb7a93e0c9009a702389af1",
"text": "OBJECTIVE\nThe authors present clinical outcome data and satisfaction of patients who underwent minimally invasive vertebral body corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach and posterior short-segment instrumentation for lumbar burst fractures.\n\n\nMETHODS\nPatients with unstable lumbar burst fractures who underwent corpectomy and anterior column reconstruction via a mini-open, extreme lateral, transpsoas approach with short-segment posterior fixation were reviewed retrospectively. Demographic information, operative parameters, perioperative radiographic measurements, and complications were analyzed. Patient-reported outcome instruments (Oswestry Disability Index [ODI], 12-Item Short Form Health Survey [SF-12]) and an anterior scar-specific patient satisfaction questionnaire were recorded at the latest follow-up.\n\n\nRESULTS\nTwelve patients (7 men, 5 women, average age 42 years, range 22-68 years) met the inclusion criteria. Lumbar corpectomies with anterior column support were performed (L-1, n = 8; L-2, n = 2; L-3, n = 2) and supplemented with short-segment posterior instrumentation (4 open, 8 percutaneous). Four patients had preoperative neurological deficits, all of which improved after surgery. No new neurological complications were noted. The anterior incision on average was 6.4 cm (range 5-8 cm) in length, caused mild pain and disability, and was aesthetically acceptable to the large majority of patients. Three patients required chest tube placement for pleural violation, and 1 patient required reoperation for cage subsidence/hardware failure. Average clinical follow-up was 38 months (range 16-68 months), and average radiographic follow-up was 37 months (range 6-68 months). Preoperative lumbar lordosis and focal lordosis were significantly improved/maintained after surgery. Patients were satisfied with their outcomes, had minimal/moderate disability (average ODI score 20, range 0-52), and had good physical (SF-12 physical component score 41.7% ± 10.4%) and mental health outcomes (SF-12 mental component score 50.2% ± 11.6%) after surgery.\n\n\nCONCLUSIONS\nAnterior corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach supplemented by short-segment posterior instrumentation is a safe, effective alternative to conventional approaches in the treatment of single-level unstable burst fractures and is associated with excellent functional outcomes and patient satisfaction.",
"title": ""
},
{
"docid": "f5b372607a89ea6595683276e48d6dce",
"text": "In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications.",
"title": ""
},
{
"docid": "9228218e663951e54f31d697997c80f9",
"text": "In this paper, we describe a simple set of \"recipes\" for the analysis of high spatial density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.",
"title": ""
},
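A small sketch of the "statistical independence" recipe described above: decompose multi-channel EEG with ICA, zero out a component assumed to capture eye movement, and reconstruct the channels. The data are simulated, and the identity of the artifact component is simply assumed for the example.

```python
# ICA-based removal of an (assumed) eye-movement component from simulated EEG.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 2000
sources = rng.normal(size=(n_samples, n_channels))
sources[:, 0] = np.sign(np.sin(np.linspace(0, 20, n_samples)))   # blink-like source
mixing = rng.normal(size=(n_channels, n_channels))
eeg = sources @ mixing.T                     # observed channels = mixed sources

ica = FastICA(n_components=n_channels, random_state=0)
components = ica.fit_transform(eeg)          # (samples, components)

artifact_idx = 0   # assume inspection/EOG correlation flagged this component
components[:, artifact_idx] = 0.0
cleaned = ica.inverse_transform(components)  # back to channel space
print(cleaned.shape)                         # (2000, 8)
```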
{
"docid": "de682d74b30e699d7185765f8b235e00",
"text": "A key goal of research in conversational systems is to train an interactive agent to help a user with a task. Human conversation, however, is notoriously incomplete, ambiguous, and full of extraneous detail. To operate effectively, the agent must not only understand what was explicitly conveyed but also be able to reason in the presence of missing or unclear information. When unable to resolve ambiguities on its own, the agent must be able to ask the user for the necessary clarifications and incorporate the response in its reasoning. Motivated by this problem we introduce QRAQ (Query, Reason, and Answer Questions), a new synthetic domain, in which a User gives an Agent a short story and asks a challenge question. These problems are designed to test the reasoning and interaction capabilities of a learningbased Agent in a setting that requires multiple conversational turns. A good Agent should ask only non-deducible, relevant questions until it has enough information to correctly answer the User’s question. We use standard and improved reinforcement learning based memory-network architectures to solve QRAQ problems in the difficult setting where the reward signal only tells the Agent if its final answer to the challenge question is correct or not. To provide an upper-bound to the RL results we also train the same architectures using supervised information that tells the Agent during training which variables to query and the answer to the challenge question. We evaluate our architectures on four QRAQ dataset types, and scale the complexity for each along multiple dimensions.",
"title": ""
},
{
"docid": "753dcf47f0d1d63d2b93a8f4b5d78a33",
"text": "BACKGROUND\nTrichostasis spinulosa (TS) is a common, underdiagnosed cosmetic skin condition.\n\n\nOBJECTIVES\nThe main objectives of this study were to determine the occurrence of TS relative to age and gender, to analyze its cutaneous distribution, and to investigate any possible familial basis for this condition, its impact on patients, and the types and efficacy of previous treatments.\n\n\nMETHODS\nAll patients presenting to the outpatient dermatology clinic at the study institution and their relatives were examined for the presence of TS and were questioned about family history and previous treatment. Photographs and biopsies of suspected cases of TS were obtained.\n\n\nRESULTS\nOf 2400 patients seen between August and December 2013, 286 patients were diagnosed with TS (135 males, 151 females; prevalence: 11.9%). Women presented more frequently than men with complaints of TS (6.3 vs. 4.2%), and more women had received prior treatment for TS (10.5 vs. 2.8%). The most commonly affected sites were the face (100%), interscapular area (10.5%), and arms (3.1%). Lesions involved the nasal alae in 96.2%, the nasal tip in 90.9%, the chin in 55.9%, and the cheeks in 52.4% of patients. Only 15.7% of patients had forehead lesions, and only 4.5% had perioral lesions. Among the 38 previously treated patients, 65.8% reported temporary improvement.\n\n\nCONCLUSIONS\nTrichostasis spinulosa is a common condition that predominantly affects the face in patients of all ages. Additional studies employing larger cohorts from multiple centers will be required to determine the prevalence of TS in the general population.",
"title": ""
},
{
"docid": "65b34f78e3b8d54ad75d32cdef487dac",
"text": "Recognizing polarity requires a list of polar words and phrases. For the purpose of building such lexicon automatically, a lot of studies have investigated (semi-) unsupervised method of learning polarity of words and phrases. In this paper, we explore to use structural clues that can extract polar sentences from Japanese HTML documents, and build lexicon from the extracted polar sentences. The key idea is to develop the structural clues so that it achieves extremely high precision at the cost of recall. In order to compensate for the low recall, we used massive collection of HTML documents. Thus, we could prepare enough polar sentence corpus.",
"title": ""
},
{
"docid": "8cd8fbbc3e20d29989deeb2fd2362c10",
"text": "Modern programming languages and software engineering principles are causing increasing problems for compiler systems. Traditional approaches, which use a simple compile-link-execute model, are unable to provide adequate application performance under the demands of the new conditions. Traditional approaches to interprocedural and profile-driven compilation can provide the application performance needed, but require infeasible amounts of compilation time to build the application. This thesis presents LLVM, a design and implementation of a compiler infrastructure which supports a unique multi-stage optimization system. This system is designed to support extensive interprocedural and profile-driven optimizations, while being efficient enough for use in commercial compiler systems. The LLVM virtual instruction set is the glue that holds the system together. It is a low-level representation, but with high-level type information. This provides the benefits of a low-level representation (compact representation, wide variety of available transformations, etc.) as well as providing high-level information to support aggressive interprocedural optimizations at link-and post-link time. In particular, this system is designed to support optimization in the field, both at run-time and during otherwise unused idle time on the machine. This thesis also describes an implementation of this compiler design, the LLVM compiler infrastructure , proving that the design is feasible. The LLVM compiler infrastructure is a maturing and efficient system, which we show is a good host for a variety of research. More information about LLVM can be found on its web site at: iii Acknowledgments This thesis would not be possible without the support of a large number of people who have helped me both in big ways and little. In particular, I would like to thank my advisor, Vikram Adve, for his support, patience, and especially his trust and respect. He has shown me how to communicate ideas more effectively and how to find important and meaningful topics for research. By being demanding, understanding, and allowing me the freedom to explore my interests, he has driven me to succeed. The inspiration for this work certainly stems from one person: Tanya. She has been a continuous source of support, ideas, encouragement, and understanding. Despite my many late nights, unimaginable amounts of stress, and a truly odd sense of humor, she has not just tolerated me, but loved me. Another person who made this possible, perhaps without truly understanding his contribution, has been Brian Ensink. Brian has been an invaluable sounding board for ideas, a welcoming ear to occasional frustrations, provider …",
"title": ""
},
{
"docid": "4cb0358724add5f51b598b7dd19c3640",
"text": "110 CSEG RECORDER 2006 Special Edition Continued on Page 111 Seismic attributes have come a long way since their intro d u ction in the early 1970s and have become an integral part of seismic interpretation projects. To d a y, they are being used widely for lithological and petrophysical prediction of re s e rvoirs and various methodologies have been developed for their application to broader hydrocarbon exploration and development decision making. Beginning with the digital re c o rding of seismic data in the early 1960s and the ensuing bright spot analysis, the 1970s saw the introduction of complex trace attributes and seismic inversion along with their color displays. This was followed by the development of response attributes, introduction of texture analysis, 2D attributes, horizon and interval attributes and the pervasive use of c o l o r. 3D seismic acquisition dominated the 1990s as the most successful exploration technology of several decades and along with that came the seismic sequence attributes. The c o h e rence technology introduced in the mid 1990s significantly changed the way geophysicists interpreted seismic data. This was followed by the introduction of spectral decomposition in the late 1990s and a host of methods for evaluation of a combination of attributes. These included pattern recognition techniques as well as neural network applications. These developments continued into the new millennium, with enhanced visualization and 3D computation and interpretation of texture and curvature attributes coming to the fore f ront. Of course all this was possible with the power of scientific computing making significant advances during the same period of time. A detailed re c o ns t ruction of these key historical events that lead to the modern seismic attribute analysis may be found in Chopra and Marfurt (2005). The proliferation of seismic attributes in the last two decades has led to attempts to their classification and to bring some order to their chaotic development.",
"title": ""
},
{
"docid": "843ea8a700adf545288175c1062107bb",
"text": "Stress is a natural reaction to various stress-inducing factors which can lead to physiological and behavioral changes. If persists for a longer period, stress can cause harmful effects on our body. The body sensors along with the concept of the Internet of Things can provide rich information about one's mental and physical health. The proposed work concentrates on developing an IoT system which can efficiently detect the stress level of a person and provide a feedback which can assist the person to cope with the stressors. The system consists of a smart band module and a chest strap module which can be worn around wrist and chest respectively. The system monitors the parameters such as Electro dermal activity and Heart rate in real time and sends the data to a cloud-based ThingSpeak server serving as an online IoT platform. The computation of the data is performed using a ‘MATLAB Visualization’ application and the stress report is displayed. The authorized person can log in, view the report and take actions such as consulting a medical person, perform some meditation or yoga exercises to cope with the condition.",
"title": ""
},
{
"docid": "96bd733f9168bed4e400f315c57a48e8",
"text": "New phase transition phenomena have recently been discovered for the stochastic block model, for the special case of two non-overlapping symmetric communities. This gives raise in particular to new algorithmic challenges driven by the thresholds. This paper investigates whether a general phenomenon takes place for multiple communities, without imposing symmetry. In the general stochastic block model SBM(n,p,W), n vertices are split into k communities of relative size {pi}i∈[k], and vertices in community i and j connect independently with probability {Wij}i,j∈[k]. This paper investigates the partial and exact recovery of communities in the general SBM (in the constant and logarithmic degree regimes), and uses the generality of the results to tackle overlapping communities. The contributions of the paper are: (i) an explicit characterization of the recovery threshold in the general SBM in terms of a new f-divergence function D+, which generalizes the Hellinger and Chernoff divergences, and which provides an operational meaning to a divergence function analog to the KL-divergence in the channel coding theorem, (ii) the development of an algorithm that recovers the communities all the way down to the optimal threshold and runs in quasi-linear time, showing that exact recovery has no information-theoretic to computational gap for multiple communities, (iii) the development of an efficient algorithm that detects communities in the constant degree regime with an explicit accuracy bound that can be made arbitrarily close to 1 when a prescribed signal-to-noise ratio [defined in terms of the spectrum of diag(p)W] tends to infinity.",
"title": ""
},
{
"docid": "1f4b3ad078c42404c6aa27d107026b18",
"text": "This paper presents circuit design methodologies to enhance the electromagnetic immunity of an output-capacitor-free low-dropout (LDO) regulator. To evaluate the noise performance of an LDO regulator in the small-signal domain, power-supply rejection (PSR) is used. We optimize a bandgap reference circuit for optimum dc PSR, and propose a capacitor cancelation technique circuit for bandwidth compensation, and a low-noise biasing circuit for immunity enhancement in the bias circuit. For large-signal, transient performance enhancement, we suggest using a unity-gain amplifier to minimize the voltage difference of the differential inputs of the error amplifier, and an auxiliary N-channel metal oxide semiconductor (NMOS) pass transistor was used to maintain a stable gate voltage in the pass transistor. The effectiveness of the design methodologies proposed in this paper is verified using circuit simulations using an LDO regulator designed by 0.18-$\\mu$m CMOS process. When sine and pulse signals are applied to the input, the worst dc offset variations were enhanced from 36% to 16% and from 31.7% to 9.7%, respectively, as compared with those of the conventional LDO. We evaluated the noise performance versus the conducted electromagnetic interference generated by the dc–dc converter; the noise reduction level was significantly improved.",
"title": ""
},
{
"docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442",
"text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.",
"title": ""
},
{
"docid": "1a14570fa1d565aeb78165c72bdf8a4e",
"text": "We investigate the ride-sharing assignment problem from an algorithmic resource allocation point of view. Given a number of requests with source and destination locations, and a number of available car locations, the task is to assign cars to requests with two requests sharing one car. We formulate this as a combinatorial optimization problem, and show that it is NP-hard. We then design an approximation algorithm which guarantees to output a solution with at most 2.5 times the optimal cost. Experiments are conducted showing that our algorithm actually has a much better approximation ratio (around 1.2) on synthetically generated data. Introduction The sharing economy is estimated to grow from $14 billion in 2014 to $335 billion by 2025 (Yaraghi and Ravi 2017). As one of the largest components of sharing economy, ride-sharing provides socially efficient transport services that help to save energy and to reduce congestion. Uber has 40 million monthly active riders reported in October 2016 (Kokalitcheva 2016) and Didi Chuxing has more than 400 million users(Tec 2017). A large portion of the revenue of these companies comes from ride sharing with one car catering two passenger requests, which is the topic investigated in this paper. A typical scenario is as follows: There are a large number of requests with pickup and drop-off location information, and a large number of available cars with current location information. One of the tasks is to assign the requests to the cars, with two requests for one car. The assignment needs to be made socially efficient in the sense that the ride sharing does not incur much extra traveling distance for the drivers or and extra waiting time for the passengers. In this paper we investigate this ride-sharing assignment problem from an algorithmic resource allocation point of view. Formally, suppose that there are a set R of requests {(si, ti) ∈ R : i = 1, . . . ,m} where in request i, an agent is at location si and likes to go to location ti. There are also a set D of taxis {dk ∈ R : k = 1, . . . , n}, with taxi k currently at location dk. The task is to assign two agents i and j to one taxi k, so that the total driving distance is as small as possible. The distance measure d(x, y) here can be Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Manhattan distance (i.e., 1-norm), Euclidean distance (i.e., 2-norm), or distance on graphs if a city map is available. Here for any fixed tuple (k, {i, j}), the driver of taxi k has four possible routes, from the combination of the following two choices: he can pick agent i first or agent j first, and he can drop agent i first or drop agent j first. We assume that the driver is experienced enough to take the best among these four choices. Thus we use the total distance of this best route as the driving cost of tuple (k, {i, j}), denoted by cost(k, {i, j}). We hope to find an assignment M = {(k, {i, j}) : 1 ≤ i, j ≤ m, 1 ≤ k ≤ n} that assigns the maximum number of requests, and in the meanwhile with the cost(M) = ∑ (k,{i,j})∈M cost(k, {i, j}), summation of the driving cost, as small as possible. Here an assignment is a matching in the graph in the sense that each element in R∪D appears at most once in M . In this paper, we formulate this ride-sharing assignment as a combinatorial optimization problem. 
We show that the problem is NP-hard, and then present an approximation algorithm which, on any input, runs in time O(n) and outputs a solution M with cost(M) at most 2.5 times the optimal value. Our algorithm does not assume a specific distance measure; indeed it works for any distance that is nonnegative, symmetric and satisfies the triangle inequality. We conducted experiments where inputs are generated from uniform distributions and Gaussian mixture distributions. The approximation ratio on these empirical data is about 1.1-1.2, which is much better than the worst-case guarantee of 2.5. In addition, the results indicate that the larger n and m are, the better the approximation ratio is. Considering that n and m are very large numbers in practice, the performance of our algorithm may be even more satisfactory for practical scenarios. Ridesharing has become a key feature to increase urban transportation sustainability and is an active field of research. Several pieces of work have looked at dynamic ridesharing (Caramia et al. 2002; Fabri and Recht 2006; Agatz et al. 2012; Santos and Xavier 2013; Alonso-Mora et al. 2017), and multi-hop ridesharing (Herbawi and Weber 2011; Drews and Luxen 2013; Teubner and Flath 2015).",
"title": ""
},
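The preceding abstract defines cost(k, {i, j}) as the best of the four feasible shared routes for taxi k serving requests i and j. The following Python sketch is only an illustration of that route-cost computation under the assumption that both pickups happen before either drop-off; the paper's 2.5-approximation assignment algorithm is not reproduced here, and the coordinates and metric are hypothetical.

```python
# A minimal sketch (not the paper's algorithm): cost of serving two requests
# with one taxi, taking the best of the four pickup/drop-off route orderings.
from itertools import permutations

def dist(a, b):
    # Manhattan distance; any nonnegative, symmetric metric with the
    # triangle inequality would do.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def pair_cost(taxi, req_i, req_j):
    best = float("inf")
    for (s1, t1), (s2, t2) in permutations([req_i, req_j]):
        # Both passengers are picked up before either is dropped off.
        route_a = dist(taxi, s1) + dist(s1, s2) + dist(s2, t1) + dist(t1, t2)
        route_b = dist(taxi, s1) + dist(s1, s2) + dist(s2, t2) + dist(t2, t1)
        best = min(best, route_a, route_b)
    return best

# Hypothetical toy instance: one taxi at the origin, two requests on a line.
print(pair_cost((0, 0), ((1, 0), (3, 0)), ((2, 0), (4, 0))))
```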
{
"docid": "448d4704991a2bdc086df8f0d7920ec5",
"text": "Global progress in the industrial field, which has led to the definition of the Industry 4.0 concept, also affects other spheres of life. One of them is the education. The subject of the article is to summarize the emerging trends in education in relation to the requirements of Industry 4.0 and present possibilities of their use. One option is using augmented reality as part of a modular learning system. The main idea is to combine the elements of the CPS technology concept with modern IT features, with emphasis on simplicity of solution and hardware ease. The synthesis of these principles can combine in a single image on a conventional device a realistic view at the technological equipment, complemented with interactive virtual model of the equipment, the technical data and real-time process information.",
"title": ""
},
{
"docid": "218b2f7a8e088c1023202bd27164b780",
"text": "The explanation of crime has been preoccupied with individuals and communities as units of analysis. Recent work on offender decision making (Cornish and Clarke, 1986), situations (Clarke, 1983, 1992), environments (Brantingham and Brantingham 1981, 1993), routine activities (Cohen and Felson, 1979; Felson, 1994), and the spatial organization of drug dealing in the U.S. suggest a new unit of analysis: places. Crime is concentrated heavily in a Jew \"hot spots\" of crime (Sherman et aL 1989). The concentration of crime among repeat places is more intensive than it is among repeat offenders (Spelman and Eck, 1989). The components of this concentration are analogous to the components of the criminal careers of persons: onset, desistance, continuance, specialization, and desistance. The theoretical explanationfor variance in these components is also stronger at the level of places than it is for individuals. These facts suggest a need for rethinking theories of crime, as well as a new approach to theorizing about crime for",
"title": ""
},
{
"docid": "f4ee2fa60eb67b7081085ed222627115",
"text": "Recent advances in deep-learning-based applications have attracted a growing attention from the IoT community. These highly capable learning models have shown significant improvements in expected accuracy of various sensory inference tasks. One important and yet overlooked direction remains to provide uncertainty estimates in deep learning outputs. Since robustness and reliability of sensory inference results are critical to IoT systems, uncertainty estimates are indispensable for IoT applications. To address this challenge, we develop ApDeepSense, an effective and efficient deep learning uncertainty estimation method for resource-constrained IoT devices. ApDeepSense leverages an implicit Bayesian approximation that links neural networks to deep Gaussian processes, allowing output uncertainty to be quantified. Our approach is shown to significantly reduce the execution time and energy consumption of uncertainty estimation thanks to a novel layer-wise approximation that replaces the traditional computationally intensive sampling-based uncertainty estimation methods. ApDeepSense is designed for neural net-works trained using dropout; one of the most widely used regularization methods in deep learning. No additional training is needed for uncertainty estimation purposes. We evaluate ApDeepSense using four IoT applications on Intel Edison devices. Results show that ApDeepSense can reduce around 88.9% of the execution time and 90.0% of the energy consumption, while producing more accurate uncertainty estimates compared with state-of-the-art methods.",
"title": ""
},
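For context on the abstract above, the sketch below illustrates the conventional sampling-based (Monte Carlo dropout) uncertainty estimation that ApDeepSense is said to replace; it is not the paper's layer-wise approximation, and the tiny network, weights and input are hypothetical.

```python
# A minimal sketch of the sampling-based MC-dropout baseline: keep dropout
# active at prediction time and read uncertainty off the sample spread.
import numpy as np

rng = np.random.default_rng(0)

def mlp_with_dropout(x, w1, w2, keep_prob=0.9):
    # One hidden ReLU layer; dropout stays active at prediction time.
    h = np.maximum(x @ w1, 0.0)
    mask = rng.random(h.shape) < keep_prob
    h = h * mask / keep_prob
    return h @ w2

# Hypothetical, untrained weights just to illustrate the mechanics.
w1 = rng.normal(size=(4, 16))
w2 = rng.normal(size=(16, 1))
x = rng.normal(size=(1, 4))

samples = np.stack([mlp_with_dropout(x, w1, w2) for _ in range(100)])
print("predictive mean:", samples.mean(), "predictive std:", samples.std())
```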
{
"docid": "3e2e2aace1ddade88f3c8a6b7157af6b",
"text": "Verb learning is clearly a function of observation of real-world contingencies; however, it is argued that such observational information is insufficient to account fully for vocabulary acquisition. This paper provides an experimental validation of Landau & Gleitman's (1985) syntactic bootstrapping procedure; namely, that children may use syntactic information to learn new verbs. Pairs of actions were presented simultaneously with a nonsense verb in one of two syntactic structures. The actions were subsequently separated, and the children (MA = 2;1) were asked to select which action was the referent for the verb. The children's choice of referent was found to be a function of the syntactic structure in which the verb had appeared.",
"title": ""
},
{
"docid": "24006b9eb670c84904b53320fbedd32c",
"text": "Maturity Models have been introduced, over the last four decades, as guides and references for Information System management in organizations from different sectors of activity. In the healthcare field, Maturity Models have also been used to deal with the enormous complexity and demand of Hospital Information Systems. This article presents a research project that aimed to develop a new comprehensive model of maturity for a health area. HISMM (Hospital Information System Maturity Model) was developed to address a complexity of SIH and intends to offer a useful tool for the demanding role of its management. The HISMM has the peculiarity of congregating a set of key maturity Influence Factors and respective characteristics, enabling not only the assessment of the global maturity of a HIS but also the individual maturity of its different dimensions. In this article, we present the methodology for the development of Maturity Models adopted for the creation of HISMM and the underlying reasons for its choice.",
"title": ""
},
{
"docid": "c0d2fcd6daeb433a5729a412828372f8",
"text": "Most 3D reconstruction approaches passively optimise over all data, exhaustively matching pairs, rather than actively selecting data to process. This is costly both in terms of time and computer resources, and quickly becomes intractable for large datasets. This work proposes an approach to intelligently filter large amounts of data for 3D reconstructions of unknown scenes using monocular cameras. Our contributions are twofold: First, we present a novel approach to efficiently optimise the Next-Best View (NBV) in terms of accuracy and coverage using partial scene geometry. Second, we extend this to intelligently selecting stereo pairs by jointly optimising the baseline and vergence to find the NBV’s best stereo pair to perform reconstruction. Both contributions are extremely efficient, taking 0.8ms and 0.3ms per pose, respectively. Experimental evaluation shows that the proposed method allows efficient selection of stereo pairs for reconstruction, such that a dense model can be obtained with only a small number of images. Once a complete model has been obtained, the remaining computational budget is used to intelligently refine areas of uncertainty, achieving results comparable to state-of-the-art batch approaches on the Middlebury dataset, using as little as 3.8% of the views.",
"title": ""
},
{
"docid": "1de2d4e5b74461c142e054ffd2e62c2d",
"text": "Table : Comparisons of CNN, LSTM and SWEM architectures. Columns correspond to the number of compositional parameters, computational complexity and sequential operations, respectively. v Consider a text sequence represented as X, composed of a sequence of words. Let {v#, v$, ...., v%} denote the respective word embeddings for each token, where L is the sentence/document length; v The compositional function, X → z, aims to combine word embeddings into a fixed-length sentence/document representation z. Typically, LSTM or CNN are employed for this purpose;",
"title": ""
}
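The entry above contrasts CNN/LSTM compositional functions with simple word-embedding pooling. The following sketch shows one parameter-free pooling of the embeddings {v_1, ..., v_L} into a fixed-length vector z; the concatenation of average and max pooling is an illustrative choice, not necessarily the exact variant compared in the table, and the embeddings are random placeholders.

```python
# A minimal sketch of parameter-free word-embedding pooling (SWEM-style).
import numpy as np

def swem_aver(embeddings):
    return embeddings.mean(axis=0)    # average over the L tokens

def swem_max(embeddings):
    return embeddings.max(axis=0)     # element-wise max over the L tokens

# Hypothetical embeddings for a length-5 sentence with 300-d word vectors.
V = np.random.default_rng(1).normal(size=(5, 300))
z = np.concatenate([swem_aver(V), swem_max(V)])   # fixed-length representation
print(z.shape)
```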
] |
scidocsrr
|
c1e2a84ff4366325837e576dd0549e24
|
High gain 2.45 GHz 2×2 patch array stacked antenna
|
[
{
"docid": "3bb4d0f44ed5a2c14682026090053834",
"text": "A Meander Line Antenna (MLA) for 2.45 GHz is proposed. This research focuses on the optimum value of gain and reflection coefficient. Therefore, the MLA's parametric studies is discussed which involved the number of turn, width of feed (W1), length of feed (LI) and vertical length partial ground (L3). As a result, the studies have significantly achieved MLA's gain and reflection coefficient of 3.248dB and -45dB respectively. The MLA also resembles the monopole antenna behavior of Omni-directional radiation pattern. Measured and simulated results are presented. The proposed antenna has big potential to be implemented for WLAN device such as optical mouse application.",
"title": ""
}
] |
[
{
"docid": "322161b4a43b56e4770d239fe4d2c4c0",
"text": "Graph pattern matching has become a routine process in emerging applications such as social networks. In practice a data graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches from scratch via batch algorithms when the graph is updated. With this comes the need for incremental algorithms that compute changes to the matches in response to updates, to minimize unnecessary recomputation. This paper investigates incremental algorithms for graph pattern matching defined in terms of graph simulation, bounded simulation and subgraph isomorphism. (1) For simulation, we provide incremental algorithms for unit updates and certain graph patterns. These algorithms are optimal: in linear time in the size of the changes in the input and output, which characterizes the cost that is inherent to the problem itself. For general patterns we show that the incremental matching problem is unbounded, i.e., its cost is not determined by the size of the changes alone. (2) For bounded simulation, we show that the problem is unbounded even for unit updates and path patterns. (3) For subgraph isomorphism, we show that the problem is intractable and unbounded for unit updates and path patterns. (4) For multiple updates, we develop an incremental algorithm for each of simulation, bounded simulation and subgraph isomorphism. We experimentally verify that these incremental algorithms significantly outperform their batch counterparts in response to small changes, using real-life data and synthetic data.",
"title": ""
},
{
"docid": "1561ef2d0c846e8faa765aae2a7ad922",
"text": "We propose a novel monocular visual inertial odometry algorithm that combines the advantages of EKF-based approaches with those of direct photometric error minimization methods. The method is based on sparse, very small patches and incorporates the minimization of photometric error directly into the EKF measurement model so that inertial data and vision-based surface measurements are used simultaneously during camera pose estimation. We fuse vision-based and inertial measurements almost at the raw-sensor level, allowing the estimated system state to constrain and guide image-space measurements. Our formulation allows for an efficient implementation that runs in real-time on a standard CPU and has several appealing and unique characteristics such as being robust to fast camera motion, in particular rotation, and not depending on the presence of corner-like features in the scene. We experimentally demonstrate robust and accurate performance compared to ground truth and show that our method works on scenes containing only non-intersecting lines.",
"title": ""
},
{
"docid": "be1ac1b39ed75cb2ae2739ea1a443821",
"text": "In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G = (V, E) with n vertices and m edges. We propose two algorithms for enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n) space and the other runs with O(∆) time delay and in O(n + m) space, where ∆ denotes the maximum degree of G, M(n) denotes the time needed to multiply two n×n matrices, and the latter one requires O(nm) time as a preprocessing. For a given bipartite graph G, we propose three algorithms for enumerating all maximal bipartite cliques. The first algorithm runs with O(M(n)) time delay and in O(n) space, which immediately follows from the algorithm for the nonbipartite case. The second one runs with O(∆) time delay and in O(n + m) space, and the last one runs with O(∆) time delay and in O(n + m + N∆) space, where N denotes the number of all maximal bipartite cliques in G and both algorithms require O(nm) time as a preprocessing. Our algorithms improve upon all the existing algorithms, when G is either dense or sparse. Furthermore, computational experiments show that our algorithms for sparse graphs have significantly good performance for graphs which are generated randomly and appear in real-world problems.",
"title": ""
},
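As background for the abstract above, the sketch below enumerates maximal cliques with the classical Bron-Kerbosch recursion (with pivoting). It is not the delay-bounded algorithm described in the paper, and the toy graph is hypothetical.

```python
# A minimal sketch of maximal clique enumeration (Bron-Kerbosch with pivoting);
# the paper's algorithms instead bound the delay between consecutive outputs.
def bron_kerbosch(R, P, X, adj, out):
    if not P and not X:
        out.append(set(R))            # R is a maximal clique
        return
    pivot = max(P | X, key=lambda u: len(P & adj[u]))
    for v in list(P - adj[pivot]):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

# Hypothetical 4-vertex graph: a triangle {0, 1, 2} plus the edge {2, 3}.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)   # e.g. [{0, 1, 2}, {2, 3}]
```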
{
"docid": "ad48ba2fa5ab113fbdf5d9c148f9596d",
"text": "BACKGROUND\nThe Prophylactic hypOthermia to Lessen trAumatic bRain injury-Randomised Controlled Trial (POLAR-RCT) will evaluate whether early and sustained prophylactic hypothermia delivered to patients with severe traumatic brain injury improves patient-centred outcomes.\n\n\nMETHODS\nThe POLAR-RCT is a multicentre, randomised, parallel group, phase III trial of early, prophylactic cooling in critically ill patients with severe traumatic brain injury, conducted in Australia, New Zealand, France, Switzerland, Saudi Arabia and Qatar. A total of 511 patients aged 18-60 years have been enrolled with severe acute traumatic brain injury. The trial intervention of early and sustained prophylactic hypothermia to 33 °C for 72 h will be compared to standard normothermia maintained at a core temperature of 37 °C. The primary outcome is the proportion of favourable neurological outcomes, comprising good recovery or moderate disability, observed at six months following randomisation utilising a midpoint dichotomisation of the Extended Glasgow Outcome Scale (GOSE). Secondary outcomes, also assessed at six months following randomisation, include the probability of an equal or greater GOSE level, mortality, the proportions of patients with haemorrhage or infection, as well as assessment of quality of life and health economic outcomes. The planned sample size will allow 80% power to detect a 30% relative risk increase from 50% to 65% (equivalent to a 15% absolute risk increase) in favourable neurological outcome at a two-sided alpha of 0.05.\n\n\nDISCUSSION\nConsistent with international guidelines, a detailed and prospective analysis plan has been developed for the POLAR-RCT. This plan specifies the statistical models for evaluation of primary and secondary outcomes, as well as defining covariates for adjusted analyses and methods for exploratory analyses. Application of this statistical analysis plan to the forthcoming POLAR-RCT trial will facilitate unbiased analyses of these important clinical data.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov, NCT00987688 (first posted 1 October 2009); Australian New Zealand Clinical Trials Registry, ACTRN12609000764235 . Registered on 3 September 2009.",
"title": ""
},
{
"docid": "4467f4fc7e9f1199ca6b57f7818ca42c",
"text": "Banking in several developing countries has transcended from a traditional brick-and mortar model of customers queuing for services in the banks to modern day banking where banks can be reached at any point for their services. This can be attributed to the tremendous growth in mobile penetration in many countries across the globe including Jordan. The current exploratory study is an attempt to identify the underlying factors that affects mobile banking adoption in Jordan. Data for this study have been collected using a questionnaire containing 22 questions. Out of 450 questionnaires that have been distributed, 301 are returned (66.0%). In the survey, factors that may affect Jordanian mobile phone users' to adopt mobile banking services were examined. The research findings suggested that all the six factors; self efficacy, trailability, compatibility, complexity, risk and relative advantage were statistically significant in influencing mobile banking adoption.",
"title": ""
},
{
"docid": "3f807cb7e753ebd70558a0ce74b416b7",
"text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
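To make the low-rank factorization component of the abstract above concrete, the sketch below runs simple block coordinate updates for matrix completion on observed entries only. The total variation regularizer and the tensor unfoldings used in the paper are omitted, and all sizes and data are synthetic placeholders.

```python
# A minimal sketch of block coordinate descent for low-rank matrix completion.
import numpy as np

rng = np.random.default_rng(0)
m, n, r, lam = 20, 15, 3, 0.1

M_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # rank-r ground truth
mask = rng.random((m, n)) < 0.5                               # observed entries
A, B = rng.normal(size=(m, r)), rng.normal(size=(n, r))       # factor blocks

for _ in range(30):                       # alternating (block) updates
    for i in range(m):                    # update row i of A from observed row i
        Bi = B[mask[i]]
        A[i] = np.linalg.solve(Bi.T @ Bi + lam * np.eye(r),
                               Bi.T @ M_true[i, mask[i]])
    for j in range(n):                    # update row j of B from observed column j
        Aj = A[mask[:, j]]
        B[j] = np.linalg.solve(Aj.T @ Aj + lam * np.eye(r),
                               Aj.T @ M_true[mask[:, j], j])

err = np.linalg.norm((A @ B.T - M_true)[~mask]) / np.linalg.norm(M_true[~mask])
print("relative error on missing entries:", round(float(err), 3))
```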
{
"docid": "a33e8a616955971014ceea9da1e8fcbe",
"text": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.",
"title": ""
},
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
},
{
"docid": "cf341e272dcc4773829f09e36a0519b3",
"text": "Malicious Web sites are a cornerstone of Internet criminal activities. The dangers of these sites have created a demand for safeguards that protect end-users from visiting them. This article explores how to detect malicious Web sites from the lexical and host-based features of their URLs. We show that this problem lends itself naturally to modern algorithms for online learning. Online algorithms not only process large numbers of URLs more efficiently than batch algorithms, they also adapt more quickly to new features in the continuously evolving distribution of malicious URLs. We develop a real-time system for gathering URL features and pair it with a real-time feed of labeled URLs from a large Web mail provider. From these features and labels, we are able to train an online classifier that detects malicious Web sites with 99% accuracy over a balanced dataset.",
"title": ""
},
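The abstract above argues for online learning over lexical URL features. The sketch below is a deliberately small, perceptron-style illustration with hashed tokens; the paper's feature set, learners and labeled feed are far richer, and the example URLs and labels here are made up.

```python
# A minimal sketch of an online, mistake-driven classifier over hashed URL tokens.
import re

def lexical_features(url, dim=2**16):
    # Split the URL on common delimiters and hash each token into [0, dim).
    return {hash(tok) % dim for tok in re.split(r"[/.?=&_-]", url.lower()) if tok}

def predict(w, feats):
    return 1 if sum(w[i] for i in feats) > 0 else -1

def update(w, feats, label, lr=0.1):
    if predict(w, feats) != label:        # perceptron-style update on mistakes
        for i in feats:
            w[i] += lr * label

# Hypothetical labeled stream; +1 = malicious, -1 = benign.
stream = [("http://paypa1-login.example/verify?id=1", +1),
          ("http://www.python.org/downloads", -1)]
w = [0.0] * 2**16
for url, y in stream:
    update(w, lexical_features(url), y)
print(predict(w, lexical_features("http://paypa1-login.example/verify?id=2")))
```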
{
"docid": "8588a3317d4b594d8e19cb005c3d35c7",
"text": "Histograms of Oriented Gradients (HOG) is one of the wellknown features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N.Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on a image region and the combined features are classified by using linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using Stepwise Forward Selection (SFS) algorithm or Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates are confirmed through experiments using MIT pedestrian dataset.",
"title": ""
},
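A minimal sketch of the PCA-HOG plus linear SVM pipeline described above, assuming scikit-image and scikit-learn are available. The stepwise feature selection (SFS/SBS) stage is omitted, and the random arrays below merely stand in for real pedestrian image crops.

```python
# A minimal sketch: HOG features -> PCA score vectors -> linear SVM classifier.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((40, 64, 32))            # hypothetical 64x32 grayscale crops
labels = rng.integers(0, 2, size=40)         # 1 = pedestrian, 0 = background

feats = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for im in images])
scores = PCA(n_components=10).fit_transform(feats)   # PCA-HOG score vectors
clf = LinearSVC(C=1.0).fit(scores, labels)
print("training accuracy:", clf.score(scores, labels))
```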
{
"docid": "955201c5191774ca14ea38e473bd7d04",
"text": "We advocate a relation based approach to Argumentation Mining. Our focus lies on the extraction of argumentative relations instead of the identification of arguments, themselves. By classifying pairs of sentences according to the relation that holds between them we are able to identify sentences that may be factual when considered in isolation, but carry argumentative meaning when read in context. We describe scenarios in which this is useful, as well as a corpus of annotated sentence pairs we are developing to provide a testbed for this approach.",
"title": ""
},
{
"docid": "c0c30c3b9539511e9079ec7894ad754f",
"text": "Cardiovascular disease remains the world's leading cause of death. Yet, we have known for decades that the vast majority of atherosclerosis and its subsequent morbidity and mortality are influenced predominantly by diet. This paper will describe a health-promoting whole food, plant-based diet; delineate macro- and micro-nutrition, emphasizing specific geriatric concerns; and offer guidance to physicians and other healthcare practitioners to support patients in successfully utilizing nutrition to improve their health.",
"title": ""
},
{
"docid": "f05d7f391d6d805308801d23bc3234f0",
"text": "Identifying patterns in large high dimensional data sets is a challenge. As the number of dimensions increases, the patterns in the data sets tend to be more prominent in the subspaces than the original dimensional space. A system to facilitate presentation of such subspace oriented patterns in high dimensional data sets is required to understand the data.\n Heidi is a high dimensional data visualization system that captures and visualizes the closeness of points across various subspaces of the dimensions; thus, helping to understand the data. The core concept behind Heidi is based on prominence of patterns within the nearest neighbor relations between pairs of points across the subspaces.\n Given a d-dimensional data set as input, Heidi system generates a 2-D matrix represented as a color image. This representation gives insight into (i) how the clusters are placed with respect to each other, (ii) characteristics of placement of points within a cluster in all the subspaces and (iii) characteristics of overlapping clusters in various subspaces.\n A sample of results displayed and discussed in this paper illustrate how Heidi Visualization can be interpreted.",
"title": ""
},
{
"docid": "8ca55e6a146406634335ccc1914a09d2",
"text": "In this paper we present the results of a simulation study to explore the ability of Bayesian parametric and nonparametric models to provide an adequate fit to count data, of the type that would routinely be analyzed parametrically either through fixed-effects or random-effects Poisson models. The context of the study is a randomized controlled trial with two groups (treatment and control). Our nonparametric approach utilizes several modeling formulations based on Dirichlet process priors. We find that the nonparametric models are able to flexibly adapt to the data, to offer rich posterior inference, and to provide, in a variety of settings, more accurate predictive inference than parametric models.",
"title": ""
},
{
"docid": "3bf5eaa6400ae63000a1d100114fe8fd",
"text": "In Fig. 4e of this Article, the labels for ‘Control’ and ‘HFD’ were reversed (‘Control’ should have been labelled blue rather than purple, and ‘HFD’ should have been labelled purple rather than blue). Similarly, in Fig. 4f of this Article, the labels for ‘V’ and ‘GW’ were reversed (‘V’ should have been labelled blue rather than purple, and ‘GW’ should have been labelled purple instead of blue). The original figure has been corrected online.",
"title": ""
},
{
"docid": "f309d2f237f4451bea75767f53277143",
"text": "Most problems in computational geometry are algebraic. A general approach to address nonrobustness in such problems is Exact Geometric Computation (EGC). There are now general libraries that support EGC for the general programmer (e.g., Core Library, LEDA Real). Many applications require non-algebraic functions as well. In this paper, we describe how to provide non-algebraic functions in the context of other EGC capabilities. We implemented a multiprecision hypergeometric series package which can be used to evaluate common elementary math functions to an arbitrary precision. This can be achieved relatively easily using the Core Library which supports a guaranteed precision level of accuracy. We address several issues of efficiency in such a hypergeometric package: automatic error analysis, argument reduction, preprocessing of hypergeometric parameters, and precomputed constants. Some preliminary experimental results are reported.",
"title": ""
},
{
"docid": "cbad7caa1cc1362e8cd26034617c39f4",
"text": "Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far. Each protocol addressed a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol will likely not be a best fit. This yields robustness and performance issues which may be even worse in services that exhibit fluctuating conditions and workloads. In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming at covering a larger set of conditions and use-cases, probably the union of individual subsets of these protocols. At anytime, a launched protocol in ADAPT can be aborted and replaced by another protocol according to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an \"evaluation process\" that takes into consideration both: protocol characteristics and its performance. This is achieved by applying some mathematical formulas that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time using Machine Learning prediction mechanisms to get accurate evaluations. We compare ADAPT with well known BFT protocols showing that it outperforms others as system conditions change and under dynamic workloads.",
"title": ""
},
{
"docid": "417ba025ea47d354b8e087d37ddb3655",
"text": "User satisfaction in computer games seems to be influenced by game balance, the level of challenge faced by the user. This work presents an evaluation, performed by human players, of dynamic game balancing approaches. The results indicate that adaptive approaches are more effective. This paper also enumerates some issues encountered in evaluating users’ satisfaction, in the context of games, and depicts some learned lessons.",
"title": ""
},
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
},
{
"docid": "231554e78d509e7bca2dfd4280b411bb",
"text": "Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.",
"title": ""
}
] |
scidocsrr
|
a5aed03c53584ff0d80bae4c3c78edb3
|
SenSprout: inkjet-printed soil moisture and leaf wetness sensor
|
[
{
"docid": "3c55948ba5466b04c7b3c1005d4f749f",
"text": "Energy harvesting is a key technique that can be used to overcome the barriers that prevent the real world deployment of wireless sensor networks (WSNs). In particular, solar energy harvesting has been commonly used to overcome this barrier. However, it should be noted that WSNs operating on solar power suffer form energy shortage during nighttimes. Therefore, to solve this problem, we exploit the use of TV broadcasts airwaves as energy sources to power wireless sensor nodes. We measured the output of a rectenna continuously for 7 days; from the results of this measurement, we showed that Radio Frequency (RF) energy can always be harvested. We developed an RF energy harvesting WSN prototype to show the effectiveness of RF energy harvesting for the usage of a WSN. We also proposed a duty cycle determination method for our system, and verified the validity of this method by implementing our system. This RF energy harvesting method is effective in a long period measurement application that do not require high power consumption.",
"title": ""
}
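The abstract above mentions a duty cycle determination method for an RF-energy-harvesting node. The sketch below shows only the generic energy-balance calculation such a method rests on; all power figures are hypothetical and are not taken from the paper's rectenna measurements.

```python
# A minimal sketch of a duty-cycle calculation for an energy-harvesting node:
# the node can be active only for the fraction of time the harvested power sustains.
P_harvest_uW = 60.0        # average harvested RF power (hypothetical)
P_active_uW = 30000.0      # power drawn while sensing/transmitting (hypothetical)
P_sleep_uW = 5.0           # power drawn while sleeping (hypothetical)

# Energy balance: P_harvest = d * P_active + (1 - d) * P_sleep, solved for d.
d = (P_harvest_uW - P_sleep_uW) / (P_active_uW - P_sleep_uW)
period_s = 600.0           # one wake-up every 10 minutes
print(f"duty cycle: {d:.4%}, active time per period: {d * period_s:.2f} s")
```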
] |
[
{
"docid": "5b6daefbefd44eea4e317e673ad91da3",
"text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.",
"title": ""
},
{
"docid": "7dd86bc341e2637505387a96c16ea9c8",
"text": "This paper focuses on the relationship between fine art movements in the 20th C and the pioneers of digital art from 1956 to 1986. The research is part of a project called Digital Art Museum, which is an electronic archive devoted to the history and practice of computer art, and is also active in curating exhibitions of the work. While computer art genres never became mainstream art movements, there are clear areas of common interest, even when these are separated by some decades.",
"title": ""
},
{
"docid": "be1ac1b39ed75cb2ae2739ea1a443821",
"text": "In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G = (V, E) with n vertices and m edges. We propose two algorithms for enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n) space and the other runs with O(∆) time delay and in O(n + m) space, where ∆ denotes the maximum degree of G, M(n) denotes the time needed to multiply two n×n matrices, and the latter one requires O(nm) time as a preprocessing. For a given bipartite graph G, we propose three algorithms for enumerating all maximal bipartite cliques. The first algorithm runs with O(M(n)) time delay and in O(n) space, which immediately follows from the algorithm for the nonbipartite case. The second one runs with O(∆) time delay and in O(n + m) space, and the last one runs with O(∆) time delay and in O(n + m + N∆) space, where N denotes the number of all maximal bipartite cliques in G and both algorithms require O(nm) time as a preprocessing. Our algorithms improve upon all the existing algorithms, when G is either dense or sparse. Furthermore, computational experiments show that our algorithms for sparse graphs have significantly good performance for graphs which are generated randomly and appear in real-world problems.",
"title": ""
},
{
"docid": "0642dd233fb6f25159eb0f7d030a1764",
"text": "Integrating games into the computer science curriculum has been gaining acceptance in recent years, particularly when used to improve student engagement in introductory courses. This paper argues that games can also be useful in upper level courses, such as general artificial intelligence and machine learning. We provide a case study of using a Mario game in a machine learning class to provide one successful data point where both content-specific and general learning outcomes were successfully achieved.",
"title": ""
},
{
"docid": "2936f8e1f9a6dcf2ba4fdbaee73684e2",
"text": "Recently the world of the web has become more social and more real-time. Facebook and Twitter are perhaps the exemplars of a new generation of social, real-time web services and we believe these types of service provide a fertile ground for recommender systems research. In this paper we focus on one of the key features of the social web, namely the creation of relationships between users. Like recent research, we view this as an important recommendation problem -- for a given user, UT which other users might be recommended as followers/followees -- but unlike other researchers we attempt to harness the real-time web as the basis for profiling and recommendation. To this end we evaluate a range of different profiling and recommendation strategies, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.",
"title": ""
},
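To illustrate the profiling idea in the abstract above, the sketch below builds TF-IDF profiles from users' aggregated tweets and ranks candidate followees by cosine similarity. This is one simple strategy of the kind evaluated, not the paper's full method, and the user profiles are invented.

```python
# A minimal sketch of tweet-based followee ranking via TF-IDF profiles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profiles = {                                  # hypothetical aggregated tweets
    "u_target": "recommender systems collaborative filtering real-time web",
    "u_a": "deep learning vision transformers",
    "u_b": "real-time recommender systems for the social web",
}
users = list(profiles)
X = TfidfVectorizer().fit_transform([profiles[u] for u in users])
sims = cosine_similarity(X[0], X[1:]).ravel()           # target vs. candidates
ranked = sorted(zip(users[1:], sims), key=lambda p: -p[1])
print(ranked)   # u_b should rank above u_a for u_target
```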
{
"docid": "56fa6f96657182ff527e42655bbd0863",
"text": "Nootropics or smart drugs are well-known compounds or supplements that enhance the cognitive performance. They work by increasing the mental function such as memory, creativity, motivation, and attention. Recent researches were focused on establishing a new potential nootropic derived from synthetic and natural products. The influence of nootropic in the brain has been studied widely. The nootropic affects the brain performances through number of mechanisms or pathways, for example, dopaminergic pathway. Previous researches have reported the influence of nootropics on treating memory disorders, such as Alzheimer's, Parkinson's, and Huntington's diseases. Those disorders are observed to impair the same pathways of the nootropics. Thus, recent established nootropics are designed sensitively and effectively towards the pathways. Natural nootropics such as Ginkgo biloba have been widely studied to support the beneficial effects of the compounds. Present review is concentrated on the main pathways, namely, dopaminergic and cholinergic system, and the involvement of amyloid precursor protein and secondary messenger in improving the cognitive performance.",
"title": ""
},
{
"docid": "cb929b640f8ee7b550512dd4d0dc8e17",
"text": "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.",
"title": ""
},
{
"docid": "d3c7900e22ab8d4dd52fa12f47fbba09",
"text": "In this paper, an obstacle-surmounting-enabled lower limb exoskeleton with novel linkage joints that perfectly mimicked human motions was proposed. Currently, most lower exoskeletons that use linear actuators have a direct connection between the wearer and the controlled part. Compared to the existing joints, the novel linkage joint not only fitted better into compact chasis, but also provided greater torque when the joint was at a large bend angle. As a result, it extended the angle range of joint peak torque output. With any given power, torque was prioritized over rotational speed, because instead of rotational speed, sufficiency of torque is the premise for most joint actions. With insufficient torque, the exoskeleton will be a burden instead of enhancement to its wearer. With optimized distribution of torque among the joints, the novel linkage method may contribute to easier exoskeleton movements.",
"title": ""
},
{
"docid": "5a077d1d4d6c212b7f817cc115bf31bd",
"text": "Focus group interviews are widely used in health research to explore phenomena and are accepted as a legitimate qualitative methodology. They are used to draw out interaction data from discussions among participants; researchers running these groups need to be skilled in interviewing and in managing groups, group dynamics and group discussions. This article follows Doody et al's (2013) article on the theory of focus group research; it addresses the preparation for focus groups relating to the research environment, interview process, duration, participation of group members and the role of the moderator. The article aims to assist researchers to prepare and plan for focus groups and to develop an understanding of them, so information from the groups can be used for academic studies or as part of a research proposal.",
"title": ""
},
{
"docid": "2b53e3494d58b2208f95d5bb67589677",
"text": "In his paper ‘Logic and conversation’ Grice (1989: 37) introduced a distinction between generalized and particularized conversational implicatures. His notion of a generalized conversational implicature (GCI) has been developed in two competing directions, by neo-Griceans such as Horn (1989) and Levinson (1983, 1987b, 1995, 2000) on the one hand, and relevance theorists such as Sperber & Wilson (1986) and Carston (1988, 1993, 1995, 1997, 1998a,b) on the other. Levinson defends the claim that GCIs are inferred on the basis of a set of default heuristics that are triggered by the presence of certain sorts of lexical items. These default inferences will be drawn unless something unusual in the context blocks them. Carston reconceives GCIs as contents that a speaker directly communicates, rather than as contents that are merely conversationally implicated. GCIs are treated as pragmatic developments of semantically underspecified logical forms. They are not the products of default inferences, since what is communicated depends heavily on the specific context, and not merely on the presence or absence of certain lexical items. We introduce two processing models, the Default Model and the Underspecified Model, that are inspired by these rival theoretical views. This paper describes an eye monitoring experiment that is intended to test the predictions of these two models. Our primary concern is to make a case for the claim that it is fruitful to apply an eye tracking methodology to an area of pragmatic research that has not previously been explored from a processing perspective.",
"title": ""
},
{
"docid": "9bea0e85c3de06ef440c255700b041fd",
"text": "Preterm birth and infants’ admission to neonatal intensive care units (NICU) are associated with significant emotional and psychological stresses on mothers that interfere with normal mother-infant relationship. Maternal selfefficacy in parenting ability may predict long-term outcome of mother-infant relationship as well as neurodevelopmental and behavioral development of preterm infants. The Perceived Maternal Parenting Self-Efficacy (PMP S-E) tool was developed to measure self-efficacy in mothers of premature infants in the United Kingdom. The present study determined if maternal and neonatal characteristics could predict PMP S-E scores of mothers who were administered to in a mid-west community medical center NICU. Mothers whose infants were born less than 37 weeks gestational age and admitted to a level III neonatal intensive care unit participated. Participants completed the PMP S-E and demographic survey prior to discharge. A logistic regression analysis was conducted from PMP SE scores involving 103 dyads using maternal education, race, breast feeding, maternal age, infant’s gestational age, Apgar 5-minute score, birth weight, mode of delivery and time from birth to completion of PMP S-E questionnaire. Time to completion of survey and gestational age were the significant predictors of PMP S-E scores. The finding of this study concerning the utilization of the PMP S-E in a United States mid-west tertiary neonatal center suggest that interpretation of the score requires careful consideration of these two variables.",
"title": ""
},
{
"docid": "7ec6540b44b23a0380dcb848239ccac4",
"text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.",
"title": ""
},
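The abstract above describes gating units that regulate information flow. The sketch below implements a single highway layer, y = T(x) * H(x) + (1 - T(x)) * x, in NumPy; the dimensions, random weights and the negative gate bias are illustrative choices rather than values from the paper.

```python
# A minimal sketch of one highway layer: a transform gate T mixes the
# transformed signal H(x) with the unchanged input x.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    H = np.tanh(x @ W_h + b_h)            # candidate transformation
    T = sigmoid(x @ W_t + b_t)            # transform gate in (0, 1)
    return T * H + (1.0 - T) * x          # carry gate is 1 - T

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(1, d))
W_h, W_t = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b_h, b_t = np.zeros(d), -2.0 * np.ones(d)  # negative gate bias favours carrying x
print(highway_layer(x, W_h, b_h, W_t, b_t).shape)
```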
{
"docid": "26f2e3918eb624ce346673d10b5d2eb7",
"text": "We consider generation and comprehension of natural language referring expression for objects in an image. Unlike generic image captioning which lacks natural standard evaluation criteria, quality of a referring expression may be measured by the receivers ability to correctly infer which object is being described. Following this intuition, we propose two approaches to utilize models trained for comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions, as a critic of referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing training signal to the generation module. Second, we use the comprehension model in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.",
"title": ""
},
{
"docid": "8481bf05a0afc1de516d951474fb9d92",
"text": "We propose an approach to Multitask Learning (MTL) to make deep learning models faster and lighter for applications in which multiple tasks need to be solved simultaneously, which is particularly useful in embedded, real-time systems. We develop a multitask model for both Object Detection and Semantic Segmentation and analyze the challenges that appear during its training. Our multitask network is 1.6x faster, lighter and uses less memory than deploying the single-task models in parallel. We conclude that MTL has the potential to give superior performance in exchange of a more complex training process that introduces challenges not present in single-task models.",
"title": ""
},
{
"docid": "36b6eb29650479d45b8b0479d6fc0371",
"text": "Cognizant of the research gap in the theorization of mobile learning, this paper conceptually explores how the theories and methodology of self-regulated learning (SRL), an active area in contemporary educational psychology, are inherently suited to address the issues originating from the defining characteristics of mobile learning: enabling student-centred, personal, and ubiquitous learning. These characteristics provide some of the conditions for learners to learn anywhere and anytime, and thus, entail learners to be motivated and to be able to self-regulate their own learning. We propose an analytic SRL model of mobile learning as a conceptual framework for understanding mobile learning, in which the notion of self-regulation as agency is at the core. The rationale behind this model is built on our recognition of the challenges in the current conceptualization of the mechanisms and processes of mobile learning, and the inherent relationship between mobile learning and SRL. We draw on work in a 3-year research project in developing and implementing a mobile learning environment in elementary science classes in Singapore to illustrate the application of SRL theories and methodology to understand and analyse mobile learning.",
"title": ""
},
{
"docid": "90cfe22d4e436e9caa61a2ac198cb7f7",
"text": "Deep Neural Networks (DNNs) are fast becoming ubiquitous for their ability to attain good accuracy in various machine learning tasks. A DNN’s architecture (i.e., its hyper-parameters) broadly determines the DNN’s accuracy and performance, and is often confidential. Attacking a DNN in the cloud to obtain its architecture can potentially provide major commercial value. Further, attaining a DNN’s architecture facilitates other, existing DNN attacks. This paper presents Cache Telepathy: a fast and accurate mechanism to steal a DNN’s architecture using the cache side channel. Our attack is based on the insight that DNN inference relies heavily on tiled GEMM (Generalized Matrix Multiply), and that DNN architecture parameters determine the number of GEMM calls and the dimensions of the matrices used in the GEMM functions. Such information can be leaked through the cache side channel. This paper uses Prime+Probe and Flush+Reload to attack VGG and ResNet DNNs running OpenBLAS and Intel MKL libraries. Our attack is effective in helping obtain the architectures by very substantially reducing the search space of target DNN architectures. For example, for VGG using OpenBLAS, it reduces the search space from more than 1035 architectures to just 16.",
"title": ""
},
{
"docid": "7bea83a1ed940aa68bc67b5d046cf015",
"text": "Natural languages are full of collocations, recurrent combinations of words that co-occur more often than expected by chance and that correspond to arbitrary word usages. Recent work in lexicography indicates that collocations are pervasive in English; apparently, they are common in all types of writing, including both technical and nontechnical genres. Several approaches have been proposed to retrieve various types of collocations from the analysis of large samples of textual data. These techniques automatically produce large numbers of collocations along with statistical figures intended to reflect the relevance of the associations. However, noue of these techniques provides functional information along with the collocation. Also, the results produced often contained improper word associations reflecting some spurious aspect of the training corpus that did not stand for true collocations. In this paper, we describe a set of techniques based on statistical methods for retrieving and identifying collocations from large textual corpora. These techniques produce a wide range of collocations and are based on some original filtering methods that allow the production of richer and higher-precision output. These techniques have been implemented and resulted in a lexicographic tool, Xtract. The techniques are described and some results are presented on a 10 million-word corpus of stock market news reports. A lexicographic evaluation of Xtract as a collocation retrieval tool has been made, and the estimated precision of Xtract is 80%.",
"title": ""
},
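As a toy illustration of statistical collocation retrieval discussed above, the sketch below scores adjacent word pairs by pointwise mutual information. Xtract's actual statistics, window handling and syntactic filtering are more elaborate than this, and the sample text is fabricated.

```python
# A minimal sketch of association-based collocation scoring over adjacent bigrams.
import math
from collections import Counter

text = ("the stock market rallied while the stock market index "
        "closed higher on heavy trading in the stock market").split()
unigrams = Counter(text)
bigrams = Counter(zip(text, text[1:]))
N = len(text)

def pmi(pair):
    w1, w2 = pair
    p_xy = bigrams[pair] / (N - 1)                       # joint probability
    return math.log2(p_xy / ((unigrams[w1] / N) * (unigrams[w2] / N)))

top = sorted(bigrams, key=pmi, reverse=True)[:3]
print([(p, round(pmi(p), 2)) for p in top])
```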
{
"docid": "4d9c77845346d310d5b262e75d9cedba",
"text": "Distributed database technology is expected to have a significant impact on data processing in the upcoming years. Today’s business environment has an increasing need for distributed database and Client/server applications as the desire for consistent, scalable, reliable and accessible information is steadily growing. Distributed processing is an effective way to improve reliability and performance of a database system. Distribution of data is a collection of fragmentation, allocation and replication processes. Previous research works provided fragmentation solution based on empirical data about the type and frequency of the queries submitted to a centralized system. These solutions are not suitable at the initial stage of a database design for a distributed system. The purpose of this work is to present an introduction to Distributed Databases which are becoming very popular now days with the description of distributed database environment, fragmentation and horizontal fragmentation technique. Horizontal fragmentation has an important impact in improving the applications performance that is strongly affected by distributed databases design phase. In this report, we have presented a fragmentation technique that can be applied at the initial stage as well as in later stages of a distributed database system for partitioning the relations. Allocation of fragments is done simultaneously in the algorithm. Result shows that proposed technique can solve initial fragmentation problem of relational databases for distributed systems properly.",
"title": ""
},
{
"docid": "ed47a1a6c193b6c3699805f5be641555",
"text": "Wind power generation differs from conventional thermal generation due to the stochastic nature of wind. Thus wind power forecasting plays a key role in dealing with the challenges of balancing supply and demand in any electricity system, given the uncertainty associated with the wind farm power output. Accurate wind power forecasting reduces the need for additional balancing energy and reserve power to integrate wind power. Wind power forecasting tools enable better dispatch, scheduling and unit commitment of thermal generators, hydro plant and energy storage plant and more competitive market trading as wind power ramps up and down on the grid. This paper presents an in-depth review of the current methods and advances in wind power forecasting and prediction. Firstly, numerical wind prediction methods from global to local scales, ensemble forecasting, upscaling and downscaling processes are discussed. Next the statistical and machine learning approach methods are detailed. Then the techniques used for benchmarking and uncertainty analysis of forecasts are overviewed, and the performance of various approaches over different forecast time horizons is examined. Finally, current research activities, challenges and potential future developments are appraised. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
9f3f6a7f77273a5f2de21be1d5f5ae3d
|
Smart Grid Cybersecurity: Standards and Technical Countermeasures
|
[
{
"docid": "8d21369604ad890704d535785c8e3171",
"text": "With the integration of advanced computing and communication technologies, smart grid is considered as the next-generation power system, which promises self healing, resilience, sustainability, and efficiency to the energy critical infrastructure. The smart grid innovation brings enormous challenges and initiatives across both industry and academia, in which the security issue emerges to be a critical concern. In this paper, we present a survey of recent security advances in smart grid, by a data driven approach. Compared with existing related works, our survey is centered around the security vulnerabilities and solutions within the entire lifecycle of smart grid data, which are systematically decomposed into four sequential stages: 1) data generation; 2) data acquisition; 3) data storage; and 4) data processing. Moreover, we further review the security analytics in smart grid, which employs data analytics to ensure smart grid security. Finally, an effort to shed light on potential future research concludes this paper.",
"title": ""
}
] |
[
{
"docid": "081e474c622f122832490a54657e5051",
"text": "To defend a network from intrusion is a generic problem of all time. It is important to develop a defense mechanism to secure the network from anomalous activities. This paper presents a comprehensive survey of methods and systems introduced by researchers in the past two decades to protect network resources from intrusion. A detailed pros and cons analysis of these methods and systems is also reported in this paper. Further, this paper also provides a list of issues and research challenges in this evolving field of research. We believe that, this knowledge will help to create a defense system.",
"title": ""
},
{
"docid": "8b6d5e7526e58ce66cf897d17b094a91",
"text": "Regression testing is an expensive maintenance process used to revalidate modified software. Regression test selection (RTS) techniques try to lower the cost of regression testing by selecting and running a subset of the existing test cases. Many such techniques have been proposed and initial studies show that they can produce savings. We believe, however, that issues such as the frequency with which testing is done have a strong effect on the behavior of these techniques. Therefore, we conducted an experiment to assess the effects of test application frequency on the costs and benefits of regression test selection techniques. Our results expose essential tradeoffs that should be considered when using these techniques over a series of software releases.",
"title": ""
},
{
"docid": "5491dd183e386ada396b237a41d907aa",
"text": "The technique of scale multiplication is analyzed in the framework of Canny edge detection. A scale multiplication function is defined as the product of the responses of the detection filter at two scales. Edge maps are constructed as the local maxima by thresholding the scale multiplication results. The detection and localization criteria of the scale multiplication are derived. At a small loss in the detection criterion, the localization criterion can be much improved by scale multiplication. The product of the two criteria for scale multiplication is greater than that for a single scale, which leads to better edge detection performance. Experimental results are presented.",
"title": ""
},
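The passage above defines the scale multiplication function as the product of the detection filter's responses at two scales, with edges taken as thresholded local maxima of that product. As a rough illustration only, the sketch below uses Gaussian-derivative filters as the detection filter and SciPy for the filtering; the two scales and the threshold are illustrative choices, not values from the paper, and non-maximum suppression (which the paper uses to pick local maxima) is omitted.

import numpy as np
from scipy.ndimage import gaussian_filter

def scale_multiplied_edges(img, s1=1.0, s2=2.0, thresh=0.1):
    """Edge strength as the product of gradient magnitudes at two scales."""
    def grad_mag(sigma):
        gx = gaussian_filter(img, sigma, order=[0, 1])  # derivative along x of the smoothed image
        gy = gaussian_filter(img, sigma, order=[1, 0])  # derivative along y
        return np.hypot(gx, gy)
    p = grad_mag(s1) * grad_mag(s2)   # scale multiplication
    p = p / (p.max() + 1e-12)         # normalise so the threshold is relative
    return p > thresh                 # crude edge map; no non-maximum suppression here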
{
"docid": "046f2b6ec65903d092f8576cd210d7ee",
"text": "Aim\nThe principal study objective was to investigate the pharmacokinetic characteristics and determine the absolute bioavailability and tolerability of a new sublingual (SL) buprenorphine wafer.\n\n\nMethods\nThe study was of open label, two-way randomized crossover design in 14 fasted healthy male and female volunteers. Each participant, under naltrexone block, received either a single intravenous dose of 300 mcg of buprenorphine as a constant infusion over five minutes or a sublingual dose of 800 mcg of buprenorphine in two treatment periods separated by a seven-day washout period. Blood sampling for plasma drug assay was taken on 16 occasions throughout a 48-hour period (predose and at 10, 20, 30, and 45 minutes, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 24 and 48 hours postdose). The pharmacokinetic parameters were determined by noncompartmental analyses of the buprenorphine plasma concentration-time profiles. Local tolerability was assessed using modified Likert scales.\n\n\nResults\nThe absolute bioavailability of SL buprenorphine was 45.4% (95% confidence interval = 37.8-54.3%). The median times to peak plasma concentration were 10 minutes and 60 minutes after IV and SL administration, respectively. The peak plasma concentration was 2.65 ng/mL and 0.74 ng/mL after IV and SL administration, respectively. The half-lives were 9.1 hours and 11.2 hours after IV and SL administration, respectively. The wafer had very good local tolerability.\n\n\nConclusions\nThis novel sublingual buprenorphine wafer has high bioavailability and reduced Tmax compared with other SL tablet formulations of buprenorphine. The wafer displayed very good local tolerability. The results suggest that this novel buprenorphine wafer may provide enhanced clinical utility in the management of both acute and chronic pain.\n\n\nBackground\nBuprenorphine is approved for use in pain management and opioid addiction. Sublingual administration of buprenorphine is a simple and noninvasive route of administration and has been available for many years. Improved sublingual formulations may lead to increased utilization of this useful drug for acute and chronic pain management.",
"title": ""
},
{
"docid": "d32bdf27607455fb3416a4e3e3492f01",
"text": "Photo-editing software restricts the control of objects in a photograph to the 2D image plane. We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph. As 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object. The completion process leverages the structure and symmetry in the stock 3D model to factor out the effects of illumination, and to complete the appearance of the object. We demonstrate our system by producing object manipulations that would be impossible in traditional 2D photo-editing programs, such as turning a car over, making a paper-crane flap its wings, or manipulating airplanes in a historical photograph to change its story.",
"title": ""
},
{
"docid": "bf8000b2119a5107041abf09762668ab",
"text": "With the popularity of social media, people are more and more interested in mining opinions from it. Learning from social media not only has value for research, but also good for business use. RepLab 2012 had Profiling task and Monitoring task to understand the company related tweets. Profiling task aims to determine the Ambiguity and Polarity for tweets. In order to determine this Ambiguity and Polarity for the tweets in RepLab 2012 Profiling task, we built Google Adwords Filter for Ambiguity and several approaches like SentiWordNet, Happiness Score and Machine Learning for Polarity. We achieved good performance in the training set, and the performance in test set is also acceptable.",
"title": ""
},
{
"docid": "8f6682ddcc435c95ae3ef35ebb84de7f",
"text": "A series of 59 patients was treated and operated on for pain felt over the area of the ischial tuberosity and radiating down the back of the thigh. This condition was labeled as the \"hamstring syndrome.\" Pain was typically incurred by assuming a sitting position, stretching the affected posterior thigh, and running fast. The patients usually had a history of recurrent hamstring \"tears.\" Their symptoms were caused by the tight, tendinous structures of the lateral insertion area of the hamstring muscles to the ischial tuberosity. Upon division of these structures, complete relief was obtained in 52 of the 59 patients.",
"title": ""
},
{
"docid": "9bd08edae8ab7b20aab40e24f6bdf968",
"text": "Personalized Web browsing and search hope to provide Web information that matches a user’s personal interests and thus provide more effective and efficient information access. A key feature in developing successful personalized Web applications is to build user profiles that accurately represent a user’ s interests. The main goal of this research is to investigate techniques that implicitly build ontology-based user profiles. We build the profiles without user interaction, automatically monitoring the user’s browsing habits. After building the initial profile from visited Web pages, we investigate techniques to improve the accuracy of the user profile. In particular, we focus on how quickly we can achieve profile stability, how to identify the most important concepts, the effect of depth in the concept-hierarchy on the importance of a concept, and how many levels from the hierarchy should be used to represent the user. Our major findings are that ranking the concepts in the profiles by number of documents assigned to them rather than by accumulated weights provides better profile accuracy. We are also able to identify stable concepts in the profile, thus allowing us to detect long-term user interests. We found that the accuracy of concept detection decreases as we descend them in the concept hierarchy, however this loss of accuracy must be balanced against the detailed view of the user available only through the inclusion of lower-level concepts.",
"title": ""
},
{
"docid": "90d5aca626d61806c2af3cc551b28c90",
"text": "This paper presents two novel approaches to increase performance bounds of image steganography under the criteria of minimizing distortion. First, in order to efficiently use the images’ capacities, we propose using parallel images in the embedding stage. The result is then used to prove sub-optimality of the message distribution technique used by all cost based algorithms including HUGO, S-UNIWARD, and HILL. Second, a new distribution approach is presented to further improve the security of these algorithms. Experiments show that this distribution method avoids embedding in smooth regions and thus achieves a better performance, measured by state-of-the-art steganalysis, when compared with the current used distribution.",
"title": ""
},
{
"docid": "cdf2235bea299131929700406792452c",
"text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"title": ""
},
{
"docid": "95be4f5132cde3c637c5ee217b5c8405",
"text": "In recent years, information communication and computation technologies are deeply converging, and various wireless access technologies have been successful in deployment. It can be predicted that the upcoming fifthgeneration mobile communication technology (5G) can no longer be defined by a single business model or a typical technical characteristic. 5G is a multi-service and multitechnology integrated network, meeting the future needs of a wide range of big data and the rapid development of numerous businesses, and enhancing the user experience by providing smart and customized services. In this paper, we propose a cloud-based wireless network architecture with four components, i.e., mobile cloud, cloud-based radio access network (Cloud RAN), reconfigurable network and big data centre, which is capable of providing a virtualized, reconfigurable, smart wireless network.",
"title": ""
},
{
"docid": "db26d71ec62388e5367eb0f2bb45ad40",
"text": "The linear programming (LP) is one of the most popular necessary optimization tool used for data analytics as well as in various scientific fields. However, the current state-of-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the solution to a BP converges to that of a corresponding LP formulation. Our efforts consist of two main parts. First, we perform a theoretic study and establish the conditions in which BP can solve LP [1,2]. Although there has been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop a practical BP-based parallel algorithms for solving generic LPs, and it shows 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3] and two follow-up journal papers [3,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). Considering the popularity and importance of linear optimizations in various fields, the proposed method has great potentials applicable to various big data analytics. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performances on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretic foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied. Our DISTRIBUTION A. Approved for public release: distribution unlimited. conditions not only cover various prior studies including maximum weight matching, mincost network flow, shortest path, etc., but also discover new applications such as vertex cover and traveling salesman. 2) While the theoretic study provides understanding of the nature of BP, it falls short in slow convergence speed, oscillation and wrong convergence. To make BP-based algorithms more practical, we design a BP-based framework which uses BP as a ‘weight transformer’ to resolve the convergence issue of BP. We refer the readers to our published work [1, 3] for details. The rest of the report contains a summary of our work appeared in UAI (Uncertainty in Artificial Intelligence) and IEEE Conference in Big Data [1,3] and follow up work [2,4] under submission to major journals. 
Experiment: We first establish theoretical conditions when Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide-applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions that BP converges to the solution of LP. Our theoretical result unify almost all prior result about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithm for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic, thus, it can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state of the art algorithms for several combinatorial optimization problems. -------------------------------------------------------Study 1 -------------------------------------------------------We first introduce the background for our contributions. A joint distribution of � (binary) variables � = [��] ∈ {0,1}� is called graphical model (GM) if it factorizes as follows: for � = [��] ∈ {0,1}�, where ψψ� ,�� are some non-negative functions so called factors; � is a collection of subsets (each αα� is a subset of {1,⋯ ,�} with |��| ≥ 2; �� is the projection of � onto dimensions included in αα. Assignment �∗ is called maximum-a-posteriori (MAP) assignment if �∗maximizes the probability. The following figure depicts the graphical relation between factors � and variables �. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 1: Factor graph for the graphical model with factors αα1 = {1,3},�2 = {1,2,4},�3 = {2,3,4} Now we introduce the algorithm, (max-product) BP, for approximating MAP assignment in a graphical model. BP is an iterative procedure; at each iteration �, there are four messages between each variable �� and every associated αα ∈ ��, where ��: = {� ∈ �:� ∈ �}. Then, messages are updated as follows: Finally, given messages, BP marginal beliefs are computed as follows: Then, BP outputs the approximated MAP assignment ��� = [��] as Now, we are ready to introduce the main result of Study 1. Consider the following GM: for � = [��] ∈ {0,1}� and � = [��] ∈ ��, where the factor function ψψαα for αα ∈ � is defined as for some matrices ��,�� and vectors ��,��. Consider the Linear Programming (LP) corresponding the above GM: One can easily observe that the MAP assignments for GM corresponds to the (optimal) solution of the above LP if the LP has an integral solution �∗ ∈ {0,1}�. The following theorem is our main result of Study 1 which provide sufficient conditions so that BP can indeed find the LP solution DISTRIBUTION A. Approved for public release: distribution unlimited. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. 
-------------------------------------------------------Study 2 -------------------------------------------------------Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weighted matching problem, this translates to 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world data-sets. Our evaluation shows that the framework shows higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general. However (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might be not correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 2: Overview of our generic BP-based framework To address these issues, we propose a generic BP-based framework that provides highly accurate approximate solutions for combinatorial optimization problems. The framework has two steps, as shown in Figure 2. In the first phase, it runs a BP algorithm for a fixed number of iterations without waiting for convergence. Then, the second phase runs a known heuristic using BP beliefs instead of the original weights to output a feasible solution. Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weight matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: Given a graph � = (�,�) and edge weights � = [��] ∈ �|�|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming): where δδ(�) is the set of edges incident to vertex � ∈ �. In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issue of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by utilizing existing heuristics to the given problem that find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e. function of (3). After th",
"title": ""
},
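The displayed BP update equations referenced in the report excerpt above were lost in extraction. Purely as context, a textbook form of the max-product updates on a factor graph is sketched below in LaTeX; the symbols (m for messages, b for beliefs, ψ for factors, c for the weight vector) are chosen to match the cleaned-up prose and are not guaranteed to match the report's own displayed equations.

\begin{aligned}
m^{t+1}_{\alpha\to i}(x_i) &= \max_{x_\alpha :\, x_\alpha(i)=x_i}\; \psi_\alpha(x_\alpha)\prod_{j\in\alpha\setminus\{i\}} m^{t}_{j\to\alpha}(x_j), \\
m^{t+1}_{i\to\alpha}(x_i) &= e^{-c_i x_i}\prod_{\beta\in F_i\setminus\{\alpha\}} m^{t+1}_{\beta\to i}(x_i), \\
b^{t}_{i}(x_i) &= e^{-c_i x_i}\prod_{\alpha\in F_i} m^{t}_{\alpha\to i}(x_i), \qquad
\hat{x}_i \in \arg\max_{x_i\in\{0,1\}} b^{t}_{i}(x_i).
\end{aligned}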
{
"docid": "ce8f000fa9a9ec51b8b2b63e98cec5fb",
"text": "The Berlin Brain-Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are 1) the use of well-established motor competences as control paradigms, 2) high-dimensional features from 128-channel electroencephalogram (EEG), and 3) advanced machine learning techniques. As reported earlier, our experiments demonstrate that very high information transfer rates can be achieved using the readiness potential (RP) when predicting the laterality of upcoming left- versus right-hand movements in healthy subjects. A more recent study showed that the RP similarly accompanies phantom movements in arm amputees, but the signal strength decreases with longer loss of the limb. In a complementary approach, oscillatory features are used to discriminate imagined movements (left hand versus right hand versus foot). In a recent feedback study with six healthy subjects with no or very little experience with BCI control, three subjects achieved an information transfer rate above 35 bits per minute (bpm), and further two subjects above 24 and 15 bpm, while one subject could not achieve any BCI control. These results are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials even when compared to results with very well-trained subjects operating other BCI systems.",
"title": ""
},
{
"docid": "36b4097c3c394352dc2b7ac25ff4948f",
"text": "An important task of opinion mining is to extract people’s opinions on features of an entity. For example, the sentence, “I love the GPS function of Motorola Droid” expresses a positive opinion on the “GPS function” of the Motorola phone. “GPS function” is the feature. This paper focuses on mining features. Double propagation is a state-of-the-art technique for solving the problem. It works well for medium-size corpora. However, for large and small corpora, it can result in low precision and low recall. To deal with these two problems, two improvements based on part-whole and “no” patterns are introduced to increase the recall. Then feature ranking is applied to the extracted feature candidates to improve the precision of the top-ranked candidates. We rank feature candidates by feature importance which is determined by two factors: feature relevance and feature frequency. The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high. Experiments on diverse real-life datasets show promising results.",
"title": ""
},
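The passage above ranks candidate features by running HITS on a bipartite graph linking feature candidates to indicators of feature relevance. As a small illustration only, the power-iteration step of HITS on such a bipartite graph could look like the sketch below; how the adjacency matrix A is built from part-whole and "no" patterns is specific to the paper and is assumed to be done elsewhere.

import numpy as np

def hits_rank(A, iters=50):
    """A: |features| x |indicators| bipartite adjacency matrix.
    Returns authority scores for the feature candidates (higher means more important)."""
    n_feat, n_ind = A.shape
    auth = np.ones(n_feat)   # feature importance (authority side)
    hub = np.ones(n_ind)     # indicator quality (hub side)
    for _ in range(iters):
        auth = A @ hub
        auth /= np.linalg.norm(auth) + 1e-12
        hub = A.T @ auth
        hub /= np.linalg.norm(hub) + 1e-12
    return auth  # sort features by this score to get the ranking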
{
"docid": "268e434cedbf5439612b2197be73a521",
"text": "We have recently developed a chaotic gas turbine whose rotational motion might simulate turbulent Rayleigh-Bénard convection. The nondimensionalized equations of motion of our turbine are expressed as a star network of N Lorenz subsystems, referred to as augmented Lorenz equations. Here, we propose an application of the augmented Lorenz equations to chaotic cryptography, as a type of symmetric secret-key cryptographic method, wherein message encryption is performed by superimposing the chaotic signal generated from the equations on a plaintext in much the same way as in one-time pad cryptography. The ciphertext is decrypted by unmasking the chaotic signal precisely reproduced with a secret key consisting of 2N-1 (e.g., N=101) real numbers that specify the augmented Lorenz equations. The transmitter and receiver are assumed to be connected via both a quantum communication channel on which the secret key is distributed using a quantum key distribution protocol and a classical data communication channel on which the ciphertext is transmitted. We discuss the security and feasibility of our cryptographic method.",
"title": ""
},
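The passage above masks a plaintext with a chaotic signal generated from the augmented Lorenz equations (a star network of N Lorenz subsystems). The coupling of that network is not given here, so the sketch below only illustrates the general masking idea using a single classic Lorenz system as the keystream source; the parameters, quantisation to bytes, and XOR masking are illustrative choices, not the authors' scheme.

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def keystream(n_bytes, key_state=(1.0, 1.0, 1.0)):
    # Integrate the chaotic system; the initial state plays the role of the secret key.
    t_end = 0.01 * n_bytes
    t_eval = np.linspace(0.0, t_end, n_bytes)
    sol = solve_ivp(lorenz, (0.0, t_end), key_state, t_eval=t_eval, rtol=1e-9)
    x = sol.y[0]
    return ((x - x.min()) / (np.ptp(x) + 1e-12) * 255).astype(np.uint8)

def mask(plaintext: bytes, key_state=(1.0, 1.0, 1.0)) -> bytes:
    ks = keystream(len(plaintext), key_state)
    return bytes(b ^ int(k) for b, k in zip(plaintext, ks))  # applying it again decrypts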
{
"docid": "62ff5888ad0c8065097603da8ff79cd6",
"text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge. Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.",
"title": ""
},
{
"docid": "3a4a875dc1cc491d8a7ce373043b3937",
"text": "In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks. Keywords—Outlier detection, generative adversary networks, semi-supervised learning.",
"title": ""
},
{
"docid": "20b7da7c9f630f12b0ef86d92ed7aa0f",
"text": "In this paper, a Rectangular Dielectric Resonator Antenna (RDRA) with a modified feeding line is designed and investigated at 28GHz. The modified feed line is designed to excite the DR with relative permittivity of 10 which contributes to a wide bandwidth operation. The proposed single RDRA has been fabricated and mounted on a RT/Duroid 5880 (εr = 2.2 and tanδ = 0.0009) substrate. The optimized single element has been applied to array structure to improve the gain and achieve the required gain performance. The radiation pattern, impedance bandwidth and gain are simulated and measured accordingly. The number of elements and element spacing are studied for an optimum performance. The proposed antenna obtains a reflection coefficient response from 27.0GHz to 29.1GHz which cover the desired frequency band. This makes the proposed antenna achieve 2.1GHz impedance bandwidth and gain of 12.1 dB. Thus, it has potential for millimeter wave and 5G applications.",
"title": ""
},
{
"docid": "b01436481aa77ebe7538e760132c5f3c",
"text": "We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented.",
"title": ""
},
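For context on the passage above: a generic Bregman iteration for a constrained regularization problem min_u J(u) subject to Au = f is often written in the "add back the residual" form shown below in LaTeX, with J(u) the (nonlocal) TV energy. This is a standard form (following Osher et al.), not the authors' specific operator-splitting scheme, which is not reproduced here.

u^{k+1} = \arg\min_{u}\; J(u) + \tfrac{\lambda}{2}\,\lVert A u - f^{k}\rVert_2^2, \qquad
f^{k+1} = f^{k} + f - A u^{k+1}, \qquad f^{0} = f.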
{
"docid": "34f83c7dde28c720f82581804accfa71",
"text": "The main threats to human health from heavy metals are associated with exposure to lead, cadmium, mercury and arsenic. These metals have been extensively studied and their effects on human health regularly reviewed by international bodies such as the WHO. Heavy metals have been used by humans for thousands of years. Although several adverse health effects of heavy metals have been known for a long time, exposure to heavy metals continues, and is even increasing in some parts of the world, in particular in less developed countries, though emissions have declined in most developed countries over the last 100 years. Cadmium compounds are currently mainly used in re-chargeable nickel-cadmium batteries. Cadmium emissions have increased dramatically during the 20th century, one reason being that cadmium-containing products are rarely re-cycled, but often dumped together with household waste. Cigarette smoking is a major source of cadmium exposure. In non-smokers, food is the most important source of cadmium exposure. Recent data indicate that adverse health effects of cadmium exposure may occur at lower exposure levels than previously anticipated, primarily in the form of kidney damage but possibly also bone effects and fractures. Many individuals in Europe already exceed these exposure levels and the margin is very narrow for large groups. Therefore, measures should be taken to reduce cadmium exposure in the general population in order to minimize the risk of adverse health effects. The general population is primarily exposed to mercury via food, fish being a major source of methyl mercury exposure, and dental amalgam. The general population does not face a significant health risk from methyl mercury, although certain groups with high fish consumption may attain blood levels associated with a low risk of neurological damage to adults. Since there is a risk to the fetus in particular, pregnant women should avoid a high intake of certain fish, such as shark, swordfish and tuna; fish (such as pike, walleye and bass) taken from polluted fresh waters should especially be avoided. There has been a debate on the safety of dental amalgams and claims have been made that mercury from amalgam may cause a variety of diseases. However, there are no studies so far that have been able to show any associations between amalgam fillings and ill health. The general population is exposed to lead from air and food in roughly equal proportions. During the last century, lead emissions to ambient air have caused considerable pollution, mainly due to lead emissions from petrol. Children are particularly susceptible to lead exposure due to high gastrointestinal uptake and the permeable blood-brain barrier. Blood levels in children should be reduced below the levels so far considered acceptable, recent data indicating that there may be neurotoxic effects of lead at lower levels of exposure than previously anticipated. Although lead in petrol has dramatically decreased over the last decades, thereby reducing environmental exposure, phasing out any remaining uses of lead additives in motor fuels should be encouraged. The use of lead-based paints should be abandoned, and lead should not be used in food containers. In particular, the public should be aware of glazed food containers, which may leach lead into food. Exposure to arsenic is mainly via intake of food and drinking water, food being the most important source in most populations. 
Long-term exposure to arsenic in drinking-water is mainly related to increased risks of skin cancer, but also some other cancers, as well as other skin lesions such as hyperkeratosis and pigmentation changes. Occupational exposure to arsenic, primarily by inhalation, is causally associated with lung cancer. Clear exposure-response relationships and high risks have been observed.",
"title": ""
}
] |
scidocsrr
|
34f46ace4af41969e4e324ca76d8e028
|
Gut brain axis: diet microbiota interactions and implications for modulation of anxiety and depression.
|
[
{
"docid": "d348d178b17d63ae49cfe6fd4e052758",
"text": "BACKGROUND & AIMS\nChanges in gut microbiota have been reported to alter signaling mechanisms, emotional behavior, and visceral nociceptive reflexes in rodents. However, alteration of the intestinal microbiota with antibiotics or probiotics has not been shown to produce these changes in humans. We investigated whether consumption of a fermented milk product with probiotic (FMPP) for 4 weeks by healthy women altered brain intrinsic connectivity or responses to emotional attention tasks.\n\n\nMETHODS\nHealthy women with no gastrointestinal or psychiatric symptoms were randomly assigned to groups given FMPP (n = 12), a nonfermented milk product (n = 11, controls), or no intervention (n = 13) twice daily for 4 weeks. The FMPP contained Bifidobacterium animalis subsp Lactis, Streptococcus thermophiles, Lactobacillus bulgaricus, and Lactococcus lactis subsp Lactis. Participants underwent functional magnetic resonance imaging before and after the intervention to measure brain response to an emotional faces attention task and resting brain activity. Multivariate and region of interest analyses were performed.\n\n\nRESULTS\nFMPP intake was associated with reduced task-related response of a distributed functional network (49% cross-block covariance; P = .004) containing affective, viscerosensory, and somatosensory cortices. Alterations in intrinsic activity of resting brain indicated that ingestion of FMPP was associated with changes in midbrain connectivity, which could explain the observed differences in activity during the task.\n\n\nCONCLUSIONS\nFour-week intake of an FMPP by healthy women affected activity of brain regions that control central processing of emotion and sensation.",
"title": ""
},
{
"docid": "bb008d90a8e5ea4262afc0cf784ccbb8",
"text": "*Correspondence to: Michaël Messaoudi; Email: mmessaoudi@etap-lab.com In a recent clinical study, we demonstrated in the general population that Lactobacillus helveticus R0052 and Bifidobacterium longum R0175 (PF) taken in combination for 30 days decreased the global scores of hospital anxiety and depression scale (HADs), and the global severity index of the Hopkins symptoms checklist (HSCL90), due to the decrease of the sub-scores of somatization, depression and angerhostility spheres. Therefore, oral intake of PF showed beneficial effects on anxiety and depression related behaviors in human volunteers. From there, it is interesting to focus on the role of this probiotic formulation in the subjects with the lowest urinary free cortisol levels at baseline. This addendum presents a secondary analysis of the effects of PF in a subpopulation of 25 subjects with urinary free cortisol (UFC) levels less than 50 ng/ml at baseline, on psychological distress based on the percentage of change of the perceived stress scale (PSs), the HADs and the HSCL-90 scores between baseline and follow-up. The data show that PF improves the same scores as in the general population (the HADs global score, the global severity index of the HSCL-90 and three of its sub-scores, i.e., somatization, depression and anger-hostility), as well as the PSs score and three other subscores of the HSCL-90, i.e., “obsessive compulsive,” “anxiety” and “paranoidideation.” Moreover, in the HSCL-90, Beneficial psychological effects of a probiotic formulation (Lactobacillus helveticus R0052 and Bifidobacterium longum R0175) in healthy human volunteers",
"title": ""
},
{
"docid": "92d271da0c5dff6e130e55168c64d2b0",
"text": "New therapeutic targets for noncognitive reductions in energy intake, absorption, or storage are crucial given the worldwide epidemic of obesity. The gut microbial community (microbiota) is essential for processing dietary polysaccharides. We found that conventionalization of adult germ-free (GF) C57BL/6 mice with a normal microbiota harvested from the distal intestine (cecum) of conventionally raised animals produces a 60% increase in body fat content and insulin resistance within 14 days despite reduced food intake. Studies of GF and conventionalized mice revealed that the microbiota promotes absorption of monosaccharides from the gut lumen, with resulting induction of de novo hepatic lipogenesis. Fasting-induced adipocyte factor (Fiaf), a member of the angiopoietin-like family of proteins, is selectively suppressed in the intestinal epithelium of normal mice by conventionalization. Analysis of GF and conventionalized, normal and Fiaf knockout mice established that Fiaf is a circulating lipoprotein lipase inhibitor and that its suppression is essential for the microbiota-induced deposition of triglycerides in adipocytes. Studies of Rag1-/- animals indicate that these host responses do not require mature lymphocytes. Our findings suggest that the gut microbiota is an important environmental factor that affects energy harvest from the diet and energy storage in the host. Data deposition: The sequences reported in this paper have been deposited in the GenBank database (accession nos. AY 667702--AY 668946).",
"title": ""
}
] |
[
{
"docid": "de8f5656f17151c43e2454aa7b8f929f",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading concrete mathematics a foundation for computer science is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "e465b9a38e7649f541ab9e419103b362",
"text": "Spoken language based intelligent assistants (IAs) have been developed for a number of domains but their functionality has mostly been confined to the scope of a given app. One reason is that it’s is difficult for IAs to infer a user’s intent without access to relevant context and unless explicitly implemented, context is not available across app boundaries. We describe context-aware multi-app dialog systems that can learn to 1) identify meaningful user intents; 2) produce natural language representation for the semantics of such intents; and 3) predict user intent as they engage in multi-app tasks. As part of our work we collected data from the smartphones of 14 users engaged in real-life multi-app tasks. We found that it is reasonable to group tasks into high-level intentions. Based on the dialog content, IA can generate useful phrases to describe the intention. We also found that, with readily available contexts, IAs can effectively predict user’s intents during conversation, with accuracy at 58.9%.",
"title": ""
},
{
"docid": "73ece9a0404ecb0cf59c7c5a1f9586d7",
"text": "BACKGROUND\nAlthough there is abundant evidence to recommend a physically active lifestyle, adult physical activity (PA) levels have declined over the past two decades. In order to understand why this happens, numerous studies have been conducted to uncover the reasons for people's participation in PA. Often, the measures used were not broad enough to reflect all the reasons for participation in PA. The Physical Activity and Leisure Motivation Scale (PALMS) was created to be a comprehensive tool measuring motives for participating in PA. This 40-item scale related to participation in sport and PA is designed for adolescents and adults. Five items constitute each of the eight sub-scales (mastery, enjoyment, psychological condition, physical condition, appearance, other's expectations, affiliation, competition/ego) reflecting motives for participation in PA that can be categorized as features of intrinsic and extrinsic motivation based on self-determination theory. The aim of the current study was to validate the PALMS in the cultural context of Malaysia, including to assess how well the PALMS captures the same information as the Recreational Exercise Motivation Measure (REMM).\n\n\nMETHOD\nTo do so, 502 Malaysian volunteer participants, aged 18 to 67 years (mean ± SD; 31.55 ± 11.87 years), from a variety of PA categories, including individual sports, team sports, martial arts and exercise, completed the study.\n\n\nRESULTS\nThe hypothesized 8-factor model demonstrated a good fit with the data (CMIN/DF = 2.820, NFI = 0.90, CFI = 0.91, RMSEA = 0.06). Cronbach's alpha coefficient (α = 0.79) indicated good internal consistency for the overall measure. Internal consistency for the PALMS subscales was sound, ranging from 0.78 to 0.82. The correlations between each PALMS sub-scale and the corresponding sub-scale on the validated REMM (the 73-item questionnaire from which the PALMS was developed) were also high and varied from 0.79 to 0.95. Also, test-retest reliability for the questionnaire sub-scales was between 0.78 and 0.94 over a 4-week period.\n\n\nCONCLUSIONS\nIn this sample, the PALMS demonstrated acceptable factor structure, internal consistency, test-retest reliability, and criterion validity. It was applicable to diverse physical activity contexts.",
"title": ""
},
{
"docid": "6af7bb1d2a7d8d44321a5b162c9781a2",
"text": "In this paper, we propose a deep metric learning (DML) approach for robust visual tracking under the particle filter framework. Unlike most existing appearance-based visual trackers, which use hand-crafted similarity metrics, our DML tracker learns a nonlinear distance metric to classify the target object and background regions using a feed-forward neural network architecture. Since there are usually large variations in visual objects caused by varying deformations, illuminations, occlusions, motions, rotations, scales, and cluttered backgrounds, conventional linear similarity metrics cannot work well in such scenarios. To address this, our proposed DML tracker first learns a set of hierarchical nonlinear transformations in the feed-forward neural network to project both the template and particles into the same feature space where the intra-class variations of positive training pairs are minimized and the interclass variations of negative training pairs are maximized simultaneously. Then, the candidate that is most similar to the template in the learned deep network is identified as the true target. Experiments on the benchmark data set including 51 challenging videos show that our DML tracker achieves a very competitive performance with the state-of-the-art trackers.",
"title": ""
},
{
"docid": "49c1924821c326f803cefff58ca7ab67",
"text": "Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the program being analyzed, architecture/OS specificity, being user-mode only, and lacking APIs. We present DECAF, a virtual machine based, multi-target, whole-system dynamic binary analysis framework built on top of QEMU. DECAF provides Just-In-Time Virtual Machine Introspection and a plugin architecture with a simple-to-use event-driven programming interface. DECAF implements a new instruction-level taint tracking engine at bit granularity, which exercises fine control over the QEMU Tiny Code Generator (TCG) intermediate representation to accomplish on-the-fly optimizations while ensuring that the taint propagation is sound and highly precise. We perform a formal analysis of DECAF's taint propagation rules to verify that most instructions introduce neither false positives nor false negatives. We also present three platform-neutral plugins—Instruction Tracer, Keylogger Detector, and API Tracer, to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. Implementation of DECAF consists of 9,550 lines of C++ code and 10,270 lines of C code and we evaluate DECAF using CPU2006 SPEC benchmarks and show average overhead of 605 percent for system wide tainting and 12 percent for VMI.",
"title": ""
},
{
"docid": "4f3f3873e8eb89f0665fbeb456fbf477",
"text": "STUDY DESIGN\nControlled laboratory study.\n\n\nOBJECTIVES\nTo clarify whether differences in surface stability influence trunk muscle activity.\n\n\nBACKGROUND\nLumbar stabilization exercises on unstable surfaces are performed widely. One perceived advantage in performing stabilization exercises on unstable surfaces is the potential for increased muscular demand. However, there is little evidence in the literature to help establish whether this assumption is correct.\n\n\nMETHODS\nNine healthy male subjects performed lumbar stabilization exercises. Pairs of intramuscular fine-wire or surface electrodes were used to record the electromyographic signal amplitude of the rectus abdominis, the external obliques, the transversus abdominis, the erector spinae, and lumbar multifidus. Five exercises were performed on the floor and on an unstable surface: elbow-toe, hand-knee, curl-up, side bridge, and back bridge. The EMG data were normalized as the percentage of the maximum voluntary contraction, and data between doing each exercise on the stable versus unstable surface were compared using a Wilcoxon signed-rank test.\n\n\nRESULTS\nWith the elbow-toe exercise, the activity level for all muscles was enhanced when performed on the unstable surface. When performing the hand-knee and side bridge exercises, activity level of the more global muscles was enhanced when performed on an unstable surface. Performing the curl-up exercise on an unstable surface, increased the activity of the external obliques but reduced transversus abdominis activation.\n\n\nCONCLUSION\nThis study indicates that lumbar stabilization exercises on an unstable surface enhanced the activities of trunk muscles, except for the back bridge exercise.",
"title": ""
},
{
"docid": "87614469fe3251a547fe5795dd255230",
"text": "Automatic detecting and counting vehicles in unsupervised video on highways is a very challenging problem in computer vision with important practical applications such as to monitor activities at traffic intersections for detecting congestions, and then predict the traffic flow which assists in regulating traffic. Manually reviewing the large amount of data they generate is often impractical. The background subtraction and image segmentation based on morphological transformation for tracking and counting vehicles on highways is proposed. This algorithm uses erosion followed by dilation on various frames. Proposed algorithm segments the image by preserving important edges which improves the adaptive background mixture model and makes the system learn faster and more accurately, as well as adapt effectively to changing environments.",
"title": ""
},
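The passage above combines background subtraction with morphological opening (erosion followed by dilation) to segment moving vehicles before counting them. A bare-bones OpenCV sketch of such a pipeline is shown below; the MOG2 subtractor, kernel size, and area threshold are illustrative choices rather than the paper's exact settings.

import cv2

def count_vehicles(video_path, min_area=500):
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    counts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = bg.apply(frame)                                     # background subtraction
        fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels (value 127)
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)        # erosion then dilation
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        counts.append(sum(1 for c in contours if cv2.contourArea(c) > min_area))
    cap.release()
    return counts  # per-frame counts of sufficiently large moving blobs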
{
"docid": "728a06d89a57261cf0560ec3513f2ae6",
"text": "This paper reports on our review of published research relating to how teams work together to execute Big Data projects. Our findings suggest that there is no agreed upon standard for executing these projects but that there is a growing research focus in this area and that an improved process methodology would be useful. In addition, our synthesis also provides useful suggestions to help practitioners execute their projects, specifically our identified list of 33 important success factors for executing Big Data efforts, which are grouped by our six identified characteristics of a mature Big Data organization.",
"title": ""
},
{
"docid": "ab2c4d5317d2e10450513283c21ca6d3",
"text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.",
"title": ""
},
{
"docid": "49b0ba019f6f968804608aeacec2a959",
"text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.",
"title": ""
},
{
"docid": "a4f5fcd7aab7d1d48f462f680336c905",
"text": "The authors experienced a case with ocular ischemia with hypotony following injection of a dermal filler for augmentation rhinoplasty. Immediately after injection, the patient demonstrated a permanent visual loss with typical fundus features of central retinal artery occlusion. Multiple crusted ulcerative patches around the nose and left periorbit developed, and the left eye became severely inflamed, ophthalmoplegic, and hypotonic. Signs of anterior and posterior segment ischemia were observed including severe cornea edema, iris atrophy, and chorioretinal swelling. The retrograde arterial embolization of hyaluronic acid gel from vascular branches of nasal tip to central retinal artery and long posterior ciliary artery was highly suspicious. After 6 months of follow up, skin lesions and eyeball movement became normalized, but progressive exudative and tractional retinal detachment was causing phthisis bulbi.",
"title": ""
},
{
"docid": "d7e61562c913fa9fa265fd8ef5288cb5",
"text": "For our project, we consider the task of classifying the gender of an author of a blog, novel, tweet, post or comment. Previous attempts have considered traditional NLP models such as bag of words and n-grams to capture gender differences in authorship, and apply it to a specific media (e.g. formal writing, books, tweets, or blogs). Our project takes a novel approach by applying deep learning models developed by Lai et al to directly learn the gender of blog authors. We further refine their models and present a new deep learning model, the Windowed Recurrent Convolutional Neural Network (WRCNN), for gender classification. Our approaches are tested and trained on several datasets: a blog dataset used by Mukherjee et al, and two datasets representing 19th and 20th century authors, respectively. We report an accuracy of 86% on the blog dataset with our WRCNN model, comparable with state-of-the-art implementations.",
"title": ""
},
{
"docid": "0344917c6b44b85946313957a329bc9c",
"text": "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm.",
"title": ""
},
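The passage above concerns the (hash) ripple join, which estimates a join aggregate online by repeatedly sampling tuples from both inputs, joining each new tuple against everything sampled so far, and scaling the partial result up to the full relation sizes. The toy sketch below gives the basic running SUM estimator in that spirit, assuming in-memory lists and an equality join key; the real algorithm's hashing, blocking, memory-overflow handling, and confidence intervals are not reproduced here.

import random

def ripple_sum_estimate(R, S, key_r, key_s, val, steps=None):
    """Toy online estimator of SUM(val(r, s)) over the equijoin of R and S."""
    R, S = list(R), list(S)
    random.shuffle(R)
    random.shuffle(S)                       # random sampling order, without replacement
    steps = steps or min(len(R), len(S))
    idx_r, idx_s = {}, {}                   # hash indexes over the sampled tuples
    partial, estimates = 0.0, []
    for n in range(1, steps + 1):
        r, s = R[n - 1], S[n - 1]
        partial += sum(val(r, s2) for s2 in idx_s.get(key_r(r), []))   # new r vs old S samples
        partial += sum(val(r2, s) for r2 in idx_r.get(key_s(s), []))   # new s vs old R samples
        if key_r(r) == key_s(s):
            partial += val(r, s)                                       # new r vs new s
        idx_r.setdefault(key_r(r), []).append(r)
        idx_s.setdefault(key_s(s), []).append(s)
        estimates.append(partial * (len(R) * len(S)) / (n * n))        # scale up to full join
    return estimates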
{
"docid": "e9f9a7c506221bacf966808f54c4f056",
"text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.",
"title": ""
},
{
"docid": "348488fc6dd8cea52bd7b5808209c4c0",
"text": "Information Technology (IT) within Secretariat General of The Indonesian House of Representatives has important role to support the Member of Parliaments (MPs) duties and functions and therefore needs to be well managed to become enabler in achieving organization goals. In this paper, IT governance at Secretariat General of The Indonesian House of Representatives is evaluated using COBIT 5 framework to get their current capabilities level which then followed by recommendations to improve their level. The result of evaluation shows that IT governance process of Secretariat General of The Indonesian House of Representatives is 1.1 (Performed Process), which means that IT processes have been implemented and achieved their purpose. Recommendations for process improvement are derived based on three criteria (Stakeholder's support, IT human resources, and Achievement target time) resulting three processes in COBIT 5 that need to be prioritized: APO13 (Manage Security), BAI01 (Manage Programmes and Projects), and EDM01 (Ensure Governance Framework Setting and Maintenance).",
"title": ""
},
{
"docid": "ed8fef21796713aba1a6375a840c8ba3",
"text": "PURPOSE\nThe novel self-paced maximal-oxygen-uptake (VO2max) test (SPV) may be a more suitable alternative to traditional maximal tests for elite athletes due to the ability to self-regulate pace. This study aimed to examine whether the SPV can be administered on a motorized treadmill.\n\n\nMETHODS\nFourteen highly trained male distance runners performed a standard graded exercise test (GXT), an incline-based SPV (SPVincline), and a speed-based SPV (SPVspeed). The GXT included a plateau-verification stage. Both SPV protocols included 5×2-min stages (and a plateau-verification stage) and allowed for self-pacing based on fixed increments of rating of perceived exertion: 11, 13, 15, 17, and 20. The participants varied their speed and incline on the treadmill by moving between different marked zones in which the tester would then adjust the intensity.\n\n\nRESULTS\nThere was no significant difference (P=.319, ES=0.21) in the VO2max achieved in the SPVspeed (67.6±3.6 mL·kg(-1)·min(-1), 95%CI=65.6-69.7 mL·kg(-1)·min(-1)) compared with that achieved in the GXT (68.6±6.0 mL·kg(-1)·min(-1), 95%CI=65.1-72.1 mL·kg(-1)·min(-1)). Participants achieved a significantly higher VO2max in the SPVincline (70.6±4.3 mL·kg(-1)·min(-1), 95%CI=68.1-73.0 mL·kg(-1)·min(-1)) than in either the GXT (P=.027, ES=0.39) or SPVspeed (P=.001, ES=0.76).\n\n\nCONCLUSIONS\nThe SPVspeed protocol produces VO2max values similar to those obtained in the GXT and may represent a more appropriate and athlete-friendly test that is more oriented toward the variable speed found in competitive sport.",
"title": ""
},
{
"docid": "f7dd0d86e674e41903fac0badb3686b9",
"text": "Context. Software defect prediction aims to reduce the large costs involved with faults in a software system. A wide range of traditional software metrics have been evaluated as potential defect indicators. These traditional metrics are derived from the source code or from the software development process. Studies have shown that no metric clearly out performs another and identifying defect-prone code using traditional metrics has reached a performance ceiling. Less traditional metrics have been studied, with these metrics being derived from the natural language of the source code. These newer, less traditional and finer grained metrics have shown promise within defect prediction. Aims. The aim of this dissertation is to study the relationship between short Java constructs and the faultiness of source code. To study this relationship this dissertation introduces the concept of a Java sequence and Java code snippet. Sequences are created by using the Java abstract syntax tree. The ordering of the nodes within the abstract syntax tree creates the sequences, while small subsequences of this sequence are the code snippets. The dissertation tries to find a relationship between the code snippets and faulty and non-faulty code. This dissertation also looks at the evolution of the code snippets as a system matures, to discover whether code snippets significantly associated with faulty code change over time. Methods. To achieve the aims of the dissertation, two main techniques have been developed; finding defective code and extracting Java sequences and code snippets. Finding defective code has been split into two areas finding the defect fix and defect insertion points. To find the defect fix points an implementation of the bug-linking algorithm has been developed, called S + e . Two algorithms were developed to extract the sequences and the code snippets. The code snippets are analysed using the binomial test to find which ones are significantly associated with faulty and non-faulty code. These techniques have been performed on five different Java datasets; ArgoUML, AspectJ and three releases of Eclipse.JDT.core Results. There are significant associations between some code snippets and faulty code. Frequently occurring fault-prone code snippets include those associated with identifiers, method calls and variables. There are some code snippets significantly associated with faults that are always in faulty code. There are 201 code snippets that are snippets significantly associated with faults across all five of the systems. The technique is unable to find any significant associations between code snippets and non-faulty code. The relationship between code snippets and faults seems to change as the system evolves with more snippets becoming fault-prone as Eclipse.JDT.core evolved over the three releases analysed. Conclusions. This dissertation has introduced the concept of code snippets into software engineering and defect prediction. The use of code snippets offers a promising approach to identifying potentially defective code. Unlike previous approaches, code snippets are based on a comprehensive analysis of low level code features and potentially allow the full set of code defects to be identified. Initial research into the relationship between code snippets and faults has shown that some code constructs or features are significantly related to software faults. 
The significant associations between code snippets and faults has provided additional empirical evidence to some already researched bad constructs within defect prediction. The code snippets have shown that some constructs significantly associated with faults are located in all five systems, and although this set is small finding any defect indicators that transfer successfully from one system to another is rare.",
"title": ""
},
{
"docid": "861b170e5da6941e2cf55d8b7d9799b6",
"text": "Scaling wireless charging to power levels suitable for heavy duty passenger vehicles and mass transit bus requires indepth assessment of wireless power transfer (WPT) architectures, component sizing and stress, package size, electrical insulation requirements, parasitic loss elements, and cost minimization. It is demonstrated through an architecture comparison that the voltage rating of the power inverter semiconductors will be higher for inductor-capacitor-capacitor (LCC) than for a more conventional Series-Parallel (S-P) tuning. Higher voltage at the source inverter dc bus facilitates better utilization of the semiconductors, hence lower cost. Electrical and thermal stress factors of the passive components are explored, in particular the compensating capacitors and coupling coils. Experimental results are presented for a prototype, precommercial, 10 kW wireless charger designed for heavy duty (HD) vehicle application. Results are in good agreement with theory and validate a design that minimizes component stress.",
"title": ""
},
{
"docid": "8bae8e7937f4c9a492a7030c62d7d9f4",
"text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.",
"title": ""
},
{
"docid": "8005d1bd2065a14097cf5da85b941fc1",
"text": "The American Psychological Association's (APA's) stance on the psychological maturity of adolescents has been criticized as inconsistent. In its Supreme Court amicus brief in Roper v. Simmons (2005), which abolished the juvenile death penalty, APA described adolescents as developmentally immature. In its amicus brief in Hodgson v. Minnesota (1990), however, which upheld adolescents' right to seek an abortion without parental involvement, APA argued that adolescents are as mature as adults. The authors present evidence that adolescents demonstrate adult levels of cognitive capability earlier than they evince emotional and social maturity. On the basis of this research, the authors argue that it is entirely reasonable to assert that adolescents possess the necessary skills to make an informed choice about terminating a pregnancy but are nevertheless less mature than adults in ways that mitigate criminal responsibility. The notion that a single line can be drawn between adolescence and adulthood for different purposes under the law is at odds with developmental science. Drawing age boundaries on the basis of developmental research cannot be done sensibly without a careful and nuanced consideration of the particular demands placed on the individual for \"adult-like\" maturity in different domains of functioning.",
"title": ""
}
] |
scidocsrr
|
0d8bb5e4e9f9c79d2ac85ba47e2e990c
|
Image Segmentation using Fuzzy C Means Clustering: A survey
|
[
{
"docid": "2c8e7bfcd41924d0fe8f66166d366751",
"text": "-Many image segmentation techniques are available in the literature. Some of these techniques use only the gray level histogram, some use spatial details while others use fuzzy set theoretic approaches. Most of these techniques are not suitable for noisy environments. Some works have been done using the Markov Random Field (MRF) model which is robust to noise, but is computationally involved. Neural network architectures which help to get the output in real time because of their parallel processing ability, have also been used for segmentation and they work fine even when the noise level is very high. The literature on color image segmentation is not that rich as it is for gray tone images. This paper critically reviews and summarizes some of these techniques. Attempts have been made to cover both fuzzy and non-fuzzy techniques including color image segmentation and neural network based approaches. Adequate attention is paid to segmentation of range images and magnetic resonance images. It also addresses the issue of quantitative evaluation of segmentation results. Image segmentation Fuzzy sets Markov Random Field Thresholding Edge detection Clustering Relaxation",
"title": ""
}
] |
[
{
"docid": "9c0d65ee42ccfaa291b576568bad59e0",
"text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.",
"title": ""
},
{
"docid": "e50b074abe37cc8caec8e3922347e0d9",
"text": "Subjectivity and sentiment analysis (SSA) has recently gained considerable attention, but most of the resources and systems built so far are tailored to English and other Indo-European languages. The need for designing systems for other languages is increasing, especially as blogging and micro-blogging websites become popular throughout the world. This paper surveys different techniques for SSA for Arabic. After a brief synopsis about Arabic, we describe the main existing techniques and test corpora for Arabic SSA that have been introduced in the literature.",
"title": ""
},
{
"docid": "6afad353d7dec9fce0e5e4531fd08cf3",
"text": "This paper describes some new developments in the application of power electronics to automotive power generation and control. A new load-matching technique is introduced that uses a simple switched-mode rectifier to achieve dramatic increases in peak and average power output from a conventional Lundell alternator, along with substantial improvements in efficiency. Experimental results demonstrate these capability improvements. Additional performance and functionality improvements of particular value for high-voltage (e.g., 42 V) alternators are also demonstrated. Tight load-dump transient suppression can be achieved using this new architecture. It is also shown that the alternator system can be used to implement jump charging (the charging of the high-voltage system battery from a low-voltage source). Dual-output extensions of the technique (e.g., 42/14 V) are also introduced. The new technology preserves the simplicity and low cost of conventional alternator designs, and can be implemented within the existing manufacturing infrastructure.",
"title": ""
},
{
"docid": "b09cacfb35cd02f6a5345c206347c6ae",
"text": "Facebook, as one of the most popular social networking sites among college students, provides a platform for people to manage others' impressions of them. People tend to present themselves in a favorable way on their Facebook profile. This research examines the impact of using Facebook on people's perceptions of others' lives. It is argued that those with deeper involvement with Facebook will have different perceptions of others than those less involved due to two reasons. First, Facebook users tend to base judgment on examples easily recalled (the availability heuristic). Second, Facebook users tend to attribute the positive content presented on Facebook to others' personality, rather than situational factors (correspondence bias), especially for those they do not know personally. Questionnaires, including items measuring years of using Facebook, time spent on Facebook each week, number of people listed as their Facebook \"friends,\" and perceptions about others' lives, were completed by 425 undergraduate students taking classes across various academic disciplines at a state university in Utah. Surveys were collected during regular class period, except for two online classes where surveys were submitted online. The multivariate analysis indicated that those who have used Facebook longer agreed more that others were happier, and agreed less that life is fair, and those spending more time on Facebook each week agreed more that others were happier and had better lives. Furthermore, those that included more people whom they did not personally know as their Facebook \"friends\" agreed more that others had better lives.",
"title": ""
},
{
"docid": "23ffdf5e7797e7f01c6d57f1e5546026",
"text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.",
"title": ""
},
{
"docid": "71819107f543aa2b20b070e322cf1bbb",
"text": "Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of the TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer and task-specific patterns beyond exact optical flow. Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of features extraction time.",
"title": ""
},
{
"docid": "857e9430ebc5cf6aad2737a0ce10941e",
"text": "Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.",
"title": ""
},
{
"docid": "166ea8466f5debc7c09880ba17c819e1",
"text": "Lymphoepithelioma-like carcinoma (LELCA) of the urinary bladder is a rare variant of bladder cancer characterized by a malignant epithelial component densely infiltrated by lymphoid cells. It is characterized by indistinct cytoplasmic borders and a syncytial growth pattern. These neoplasms deserve recognition and attention, chiefly because they may be responsive to chemotherapy. We report on the clinicopathologic features of 13 cases of LELCA recorded since 1981. The chief complaint in all 13 patients was hematuria. Their ages ranged from 58 years to 82 years. All tumors were muscle invasive. A significant lymphocytic reaction was present in all of these tumors. There were three pure LELCA and six predominant LELCA with a concurrent transitional cell carcinoma (TCC). The remainder four cases had a focal LELCA component admixed with TCC. Immunohistochemistry showed LELCA to be reactive against epithelial membrane antigen and several cytokeratins (CKs; AE1/AE3, AE1, AE3, CK7, and CK8). CK20 and CD44v6 stained focally. The lymphocytic component was composed of a mixture of T and B cells intermingled with some dendritic cells and histiocytes. Latent membrane protein 1 (LMP1) immunostaining and in situ hybridization for Epstein-Barr virus were negative in all 13 cases. DNA ploidy of these tumors gave DNA histograms with diploid peaks (n=7) or non-diploid peaks (aneuploid or tetraploid; n=6). All patients with pure and 66% with predominant LELCA were alive, while all patients having focal LELCA died of disease. Our data suggest that pure and predominant LELCA of the bladder appear to be morphologically and clinically different from other bladder (undifferentiated and poorly differentiated conventional TCC) carcinomas and should be recognized as separate clinicopathological variants of TCC with heavy lymphocytic reaction relevant in patient management.",
"title": ""
},
{
"docid": "6d50ff00babb00d36a30fdc769091b7e",
"text": "The purpose of Advanced Driver Assistance Systems (ADAS) is that driver error will be reduced or even eliminated, and efficiency in traffic and transport is enhanced. The benefits of ADAS implementations are potentially considerable because of a significant decrease in human suffering, economical cost and pollution. However, there are also potential problems to be expected, since the task of driving a ordinary motor vehicle is changing in nature, in the direction of supervising a (partly) automated moving vehicle.",
"title": ""
},
{
"docid": "bb295b25353ecdf85a104ee5a928c313",
"text": "There is growing conviction that the future of computing depends on our ability to exploit big data on theWeb to enhance intelligent systems. This includes encyclopedic knowledge for factual details, common sense for human-like reasoning and natural language generation for smarter communication. With recent chatbots conceivably at the verge of passing the Turing Test, there are calls for more common sense oriented alternatives, e.g., the Winograd Schema Challenge. The Aristo QA system demonstrates the lack of common sense in current systems in answering fourth-grade science exam questions. On the language generation front, despite the progress in deep learning, current models are easily confused by subtle distinctions that may require linguistic common sense, e.g.quick food vs. fast food. These issues bear on tasks such as machine translation and should be addressed using common sense acquired from text. Mining common sense from massive amounts of data and applying it in intelligent systems, in several respects, appears to be the next frontier in computing. Our brief overview of the state of Commonsense Knowledge (CSK) in Machine Intelligence provides insights into CSK acquisition, CSK in natural language, applications of CSK and discussion of open issues. This paper provides a report of a tutorial at a recent conference with a brief survey of topics.",
"title": ""
},
{
"docid": "066eef8e511fac1f842c699f8efccd6b",
"text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "c207f2c0dfc1ecee332df70ec5810459",
"text": "Hierarchical organization-the recursive composition of sub-modules-is ubiquitous in biological networks, including neural, metabolic, ecological, and genetic regulatory networks, and in human-made systems, such as large organizations and the Internet. To date, most research on hierarchy in networks has been limited to quantifying this property. However, an open, important question in evolutionary biology is why hierarchical organization evolves in the first place. It has recently been shown that modularity evolves because of the presence of a cost for network connections. Here we investigate whether such connection costs also tend to cause a hierarchical organization of such modules. In computational simulations, we find that networks without a connection cost do not evolve to be hierarchical, even when the task has a hierarchical structure. However, with a connection cost, networks evolve to be both modular and hierarchical, and these networks exhibit higher overall performance and evolvability (i.e. faster adaptation to new environments). Additional analyses confirm that hierarchy independently improves adaptability after controlling for modularity. Overall, our results suggest that the same force-the cost of connections-promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability. In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings will also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.",
"title": ""
},
{
"docid": "37ed4c0703266525a7d62ca98dd65e0f",
"text": "Social cognition in humans is distinguished by psychological processes that allow us to make inferences about what is going on inside other people-their intentions, feelings, and thoughts. Some of these processes likely account for aspects of human social behavior that are unique, such as our culture and civilization. Most schemes divide social information processing into those processes that are relatively automatic and driven by the stimuli, versus those that are more deliberative and controlled, and sensitive to context and strategy. These distinctions are reflected in the neural structures that underlie social cognition, where there is a recent wealth of data primarily from functional neuroimaging. Here I provide a broad survey of the key abilities, processes, and ways in which to relate these to data from cognitive neuroscience.",
"title": ""
},
{
"docid": "98729fc6a6b95222e6a6a12aa9a7ded7",
"text": "What good is self-control? We incorporated a new measure of individual differences in self-control into two large investigations of a broad spectrum of behaviors. The new scale showed good internal consistency and retest reliability. Higher scores on self-control correlated with a higher grade point average, better adjustment (fewer reports of psychopathology, higher self-esteem), less binge eating and alcohol abuse, better relationships and interpersonal skills, secure attachment, and more optimal emotional responses. Tests for curvilinearity failed to indicate any drawbacks of so-called overcontrol, and the positive effects remained after controlling for social desirability. Low self-control is thus a significant risk factor for a broad range of personal and interpersonal problems.",
"title": ""
},
{
"docid": "5c64b25ae243ad010ee15e39e5d824e3",
"text": "This paper examines the work and interactions between camera operators and a vision mixer during an ice hockey match, and presents an interaction analysis using video data. We analyze video-mediated indexical gestures in the collaborative production of live sport on television between distributed team members. The findings demonstrate how video forms the topic, resource and product of collabora-tion: whilst it shapes the nature of the work (editing), it is simultaneously also the primary resource for supporting mutual orientation and negotiating shot transitions between remote participants (co-ordination), as well as its end prod-uct (broadcast). Our analysis of current professional activi-ties is used to develop implications for the design of future services for live collaborative video production.",
"title": ""
},
{
"docid": "ec85dafd4c0f04d3e573941b397b3f10",
"text": "The future of communication resides in Internet of Things, which is certainly the most sought after technology today. The applications of IoT are diverse, and range from ordinary voice recognition to critical space programmes. Recently, a lot of efforts have been made to design operating systems for IoT devices because neither traditional Windows/Unix, nor the existing Real Time Operating Systems are able to meet the demands of heterogeneous IoT applications. This paper presents a survey of operating systems that have been designed so far for IoT devices and also outlines a generic framework that brings out the essential features desired in an OS tailored for IoT devices.",
"title": ""
},
{
"docid": "5ee5f4450ecc89b684e90e7b846f8365",
"text": "This study scrutinizes the predictive relationship between three referral channels, search engine, social medial, and third-party advertising, and online consumer search and purchase. The results derived from vector autoregressive models suggest that the three channels have differential predictive relationship with sale measures. The predictive power of the three channels is also considerably different in referring customers among competing online shopping websites. In the short run, referrals from all three channels have a significantly positive predictive relationship with the focal online store’s sales amount and volume, but having no significant relationship with conversion. Only referrals from search engines to the rival website have a significantly negative predictive relationship with the focal website’s sales and volume. In the long run, referrals from all three channels have a significant positive predictive relationship with the focal website’s sales, conversion and sales volume. In contrast, referrals from all three channels to the competing online stores have a significant negative predictive relationship with the focal website’s sales, conversion and sales volume. Our results also show that search engine referrals explains the most of the variance in sales, while social media referrals explains the most of the variance in conversion and third party ads referrals explains the most of the variance in sales volume. This study offers new insights for IT and marketing practitioners in respect to better and deeper understanding on marketing attribution and how different channels perform in order to optimize the media mix and overall performance.",
"title": ""
},
{
"docid": "1615e93f027c6f6f400ce1cc7a1bb8aa",
"text": "In the recent years, we have witnessed the rapid adoption of social media platforms, such as Twitter, Facebook and YouTube, and their use as part of the everyday life of billions of people worldwide. Given the habit of people to use these platforms to share thoughts, daily activities and experiences it is not surprising that the amount of user generated content has reached unprecedented levels, with a substantial part of that content being related to real-world events, i.e. actions or occurrences taking place at a certain time and location. Figure 1 illustrates three main categories of events along with characteristic photos from Flickr for each of them: a) news-related events, e.g. demonstrations, riots, public speeches, natural disasters, terrorist attacks, b) entertainment events, e.g. sports, music, live shows, exhibitions, festivals, and c) personal events, e.g. wedding, birthday, graduation ceremonies, vacations, and going out. Depending on the event, different types of multimedia and social media platform are more popular. For instance, news-related events are extensively published in the form of text updates, images and videos on Twitter and YouTube, entertainment and social events are often captured in the form of images and videos and shared on Flickr and YouTube, while personal events are mostly represented by images that are shared on Facebook and Instagram. Given the key role of events in our life, the task of annotating and organizing social media content around them is of crucial importance for ensuring real-time and future access to multimedia content about an event of interest. However, the vast amount of noisy and non-informative social media posts, in conjunction with their large scale, makes that task very challenging. For instance, in the case of popular events that are covered live on Twitter, there are often millions of posts referring to a single event, as in the case of the World Cup Final 2014 between Brazil and Germany, which produced approximately 32.1 million tweets with a rate of 618,725 tweets per minute. Processing, aggregating and selecting the most informative, entertaining and representative tweets among such a large dataset is a very challenging multimedia retrieval problem. In other",
"title": ""
},
{
"docid": "7203aedbdb4c3b42c34dafdefe082b63",
"text": "We discuss silver ink as a low cost option for manufacturing RFID tags at ultra high frequencies (UHF). An analysis of two different RFID tag antennas, made from silver ink and from copper, is presented at UHF. The influence of each material on tag performance is discussed along with simulation results and measurement data which are in good agreement. It is observed that RFID tag performance depends both on material and on the shape of the antenna. For some classes of antennas, silver ink with higher conductivity performs as well as copper, which makes it an attractive low cost alternative material to copper for RFID tag antennas.",
"title": ""
},
{
"docid": "e35194cb3fdd3edee6eac35c45b2da83",
"text": "The availability of high-resolution Digital Surface Models of coastal environments is of increasing interest for scientists involved in the study of the coastal system processes. Among the range of terrestrial and aerial methods available to produce such a dataset, this study tests the utility of the Structure from Motion (SfM) approach to low-altitude aerial imageries collected by Unmanned Aerial Vehicle (UAV). The SfM image-based approach was selected whilst searching for a rapid, inexpensive, and highly automated method, able to produce 3D information from unstructured aerial images. In particular, it was used to generate a dense point cloud and successively a high-resolution Digital Surface Models (DSM) of a beach dune system in Marina di Ravenna (Italy). The quality of the elevation dataset produced by the UAV-SfM was initially evaluated by comparison with point cloud generated by a Terrestrial Laser Scanning (TLS) surveys. Such a comparison served to highlight an average difference in the vertical values of 0.05 m (RMS = 0.19 m). However, although the points cloud comparison is the best approach to investigate the absolute or relative correspondence between UAV and TLS OPEN ACCESS Remote Sens. 2013, 5 6881 methods, the assessment of geomorphic features is usually based on multi-temporal surfaces analysis, where an interpolation process is required. DSMs were therefore generated from UAV and TLS points clouds and vertical absolute accuracies assessed by comparison with a Global Navigation Satellite System (GNSS) survey. The vertical comparison of UAV and TLS DSMs with respect to GNSS measurements pointed out an average distance at cm-level (RMS = 0.011 m). The successive point by point direct comparison between UAV and TLS elevations show a very small average distance, 0.015 m, with RMS = 0.220 m. Larger values are encountered in areas where sudden changes in topography are present. The UAV-based approach was demonstrated to be a straightforward one and accuracy of the vertical dataset was comparable with results obtained by TLS technology.",
"title": ""
}
] |
scidocsrr
|
84ccd2ad9d82da02eecfcea23401f585
|
Learning of Coordination Policies for Robotic Swarms
|
[
{
"docid": "1847cce79f842a7d01f1f65721c1f007",
"text": "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNN, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.",
"title": ""
}
] |
[
{
"docid": "97a13a2a11db1b67230ab1047a43e1d6",
"text": "Road detection from the perspective of moving vehicles is a challenging issue in autonomous driving. Recently, many deep learning methods spring up for this task, because they can extract high-level local features to find road regions from raw RGB data, such as convolutional neural networks and fully convolutional networks (FCNs). However, how to detect the boundary of road accurately is still an intractable problem. In this paper, we propose siamesed FCNs (named “s-FCN-loc”), which is able to consider RGB-channel images, semantic contours, and location priors simultaneously to segment the road region elaborately. To be specific, the s-FCN-loc has two streams to process the original RGB images and contour maps, respectively. At the same time, the location prior is directly appended to the siamesed FCN to promote the final detection performance. Our contributions are threefold: 1) An s-FCN-loc is proposed that learns more discriminative features of road boundaries than the original FCN to detect more accurate road regions. 2) Location prior is viewed as a type of feature map and directly appended to the final feature map in s-FCN-loc to promote the detection performance effectively, which is easier than other traditional methods, namely, different priors for different inputs (image patches). 3) The convergent speed of training s-FCN-loc model is 30% faster than the original FCN because of the guidance of highly structured contours. The proposed approach is evaluated on the KITTI road detection benchmark and one-class road detection data set, and achieves a competitive result with the state of the arts.",
"title": ""
},
{
"docid": "46a4e4dbcb9b6656414420a908b51cc5",
"text": "We review Bacry and Lévy-Leblond’s work on possible kinematics as applied to 2-dimensional spacetimes, as well as the nine types of 2-dimensional Cayley–Klein geometries, illustrating how the Cayley–Klein geometries give homogeneous spacetimes for all but one of the kinematical groups. We then construct a two-parameter family of Clifford algebras that give a unified framework for representing both the Lie algebras as well as the kinematical groups, showing that these groups are true rotation groups. In addition we give conformal models for these spacetimes.",
"title": ""
},
{
"docid": "2b3335d6fb1469c4848a201115a78e2c",
"text": "Laser grooving is used for the singulation of advanced CMOS wafers since it is believed that it exerts lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat affected zone. In this work we present a model to predict the temperature distribution, material removal, and the resulting stress, in a sandwiched structure of metals and dielectric materials that are commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth, and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by simulations, which is probably due to the redistribution of molten metal.",
"title": ""
},
{
"docid": "e561ff9b3f836c0d005db1ffdacd6f56",
"text": "A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false information campaigns with targeted manipulation of public opinion on specific topics. These false information campaigns can have dire consequences to the public: mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one, and needs increased public awareness, as well as immediate attention from law enforcement agencies, public institutions, and in particular, the research community. In this paper, we make a step in this direction by providing a typology of the Web’s false information ecosystem, comprising various types of false information, actors, and their motives. We report a comprehensive overview of existing research on the false information ecosystem by identifying several lines of work: 1) how the public perceives false information; 2) understanding the propagation of false information; 3) detecting and containing false information on the Web; and 4) false information on the political stage. In this work, we pay particular attention to political false information as: 1) it can have dire consequences to the community (e.g., when election results are mutated) and 2) previous work show that this type of false information propagates faster and further when compared to other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false information dissemination on the Web.",
"title": ""
},
{
"docid": "b759613b1eedd29d32fbbc118767b515",
"text": "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.",
"title": ""
},
{
"docid": "473d8cbcd597c961819c5be6ab2e658e",
"text": "Mobile terrestrial laser scanners (MTLS), based on light detection and ranging sensors, are used worldwide in agricultural applications. MTLS are applied to characterize the geometry and the structure of plants and crops for technical and scientific purposes. Although MTLS exhibit outstanding performance, their high cost is still a drawback for most agricultural applications. This paper presents a low-cost alternative to MTLS based on the combination of a Kinect v2 depth sensor and a real time kinematic global navigation satellite system (GNSS) with extended color information capability. The theoretical foundations of this system are exposed along with some experimental results illustrating their performance and limitations. This study is focused on open-field agricultural applications, although most conclusions can also be extrapolated to similar outdoor uses. The developed Kinect-based MTLS system allows to select different acquisition frequencies and fields of view (FOV), from one to 512 vertical slices. The authors conclude that the better performance is obtained when a FOV of a single slice is used, but at the price of a very low measuring speed. With that particular configuration, plants, crops, and objects are reproduced accurately. Future efforts will be directed to increase the scanning efficiency by improving both the hardware and software components and to make it feasible using both partial and full FOV.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "dc8f5af4c7681fa2065a11c26cf05e2b",
"text": "Bitcoin is the first e-cash system to see widespread adoption. While Bitcoin offers the potential for new types of financial interaction, it has significant limitations regarding privacy. Specifically, because the Bitcoin transaction log is completely public, users' privacy is protected only through the use of pseudonyms. In this paper we propose Zerocoin, a cryptographic extension to Bitcoin that augments the protocol to allow for fully anonymous currency transactions. Our system uses standard cryptographic assumptions and does not introduce new trusted parties or otherwise change the security model of Bitcoin. We detail Zerocoin's cryptographic construction, its integration into Bitcoin, and examine its performance both in terms of computation and impact on the Bitcoin protocol.",
"title": ""
},
{
"docid": "c4caa735537ccd82c83a330fa85e142d",
"text": "We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. To this end, we introduce a new way to fuse modality-specific product embeddings into a joint product embedding, in order to leverage both product content information, such as textual descriptions and images, and product collaborative filtering signal. By introducing the fusion step at the very end of our architecture, we are able to train each modality separately, allowing us to keep a modular architecture that is preferable in real-world recommendation deployments. We analyze our performance on normal and hard recommendation setups such as cold-start and cross-category recommendations and achieve good performance on a large product shopping dataset.",
"title": ""
},
{
"docid": "8b3a58dc4f3aceae7723c17895775a1a",
"text": "While the technology acceptance model (TAM), introduced in 1986, continues to be the most widely applied theoretical model in the IS field, few previous efforts examined its accomplishments and limitations. This study traces TAM’s history, investigates its findings, and cautiously predicts its future trajectory. One hundred and one articles published by leading IS journals and conferences in the past eighteen years are examined and summarized. An openended survey of thirty-two leading IS researchers assisted in critically examining TAM and specifying future directions.",
"title": ""
},
{
"docid": "4107e9288ea64d039211acf48a091577",
"text": "The trisomy 18 syndrome can result from a full, mosaic, or partial trisomy 18. The main clinical findings of full trisomy 18 consist of prenatal and postnatal growth deficiency, characteristic facial features, clenched hands with overriding fingers and nail hypoplasia, short sternum, short hallux, major malformations, especially of the heart, andprofound intellectual disability in the surviving older children. The phenotype of partial trisomy 18 is extremely variable. The aim of this article is to systematically review the scientific literature on patients with partial trisomy 18 in order to identify regions of chromosome 18 that may be responsible for the specific clinical features of the trisomy 18 syndrome. We confirmed that trisomy of the short arm of chromosome 18 does not seem to cause the major features. However, we found candidate regions on the long arm of chromosome 18 for some of the characteristic clinical features, and a thus a phenotypic map is proposed. Our findings confirm the hypothesis that single critical regions/candidate genes are likely to be responsible for specific characteristics of the syndrome, while a single critical region for the whole Edwards syndrome phenotype is unlikely to exist.",
"title": ""
},
{
"docid": "a7ac6803295b7359f5c8c0fcdd26e0e7",
"text": "The Internet of Things (IoT), the idea of getting real-world objects connected with each other, will change the way users organize, obtain and consume information radically. Internet of Things (IoT) enables various applications (crop growth monitoring and selection, irrigation decision support, etc.) in Digital Agriculture domain. The Wireless Sensors Network (WSN) is widely used to build decision support systems. These systems overcomes many problems in the real-world. One of the most interesting fields having an increasing need of decision support systems is Precision Agriculture (PA). Through sensor networks, agriculture can be connected to the IoT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of this approach which provides real-time information about the lands and crops that will help farmers make right decisions. The major advantage is implementation of WSN in Precision Agriculture (PA) will optimize the usage of water fertilizers while maximizing the yield of the crops and also will help in analyzing the weather conditions of the field.",
"title": ""
},
{
"docid": "1d0241833add973cc7cf6117735b7a1a",
"text": "This paper describes the conception and the construction of a low cost spin coating machine incorporating inexpensive electronic components and open-source technology based on Arduino platform. We present and discuss the details of the electrical, mechanical and control parts. This system will coat thin film in a micro level thickness and the microcontroller ATM 328 circuit controls and adjusts the spinning speed. We prepare thin films with good uniformity for various thicknesses by this spin coating system. The thickness and uniformity of deposited films were verified by determining electronic absorption spectra. We show that thin film thickness depends on the spin speed in the range of 2000–3500 rpm. We compare the results obtained on TiO2 layers deposited by our developed system to those grown by using a standard commercial spin coating systems.",
"title": ""
},
{
"docid": "d6c95e47caf4e01fa5934b861a962f6e",
"text": "Whereas theoretical work suggests that deep architectures might be more efficient at representing highly-varying functions, training deep architectures was unsuccessful until the recent advent of algorithms based on unsupervised pretraining. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. Answering these questions is important if learning in deep architectures is to be further improved. We attempt to shed some light on these questions through extensive simulations. The experiments confirm and clarify the advantage of unsupervised pre-training. They demonstrate the robustness of the training procedure with respect to the random initialization, the positive effect of pre-training in terms of optimization and its role as a regularizer. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples.",
"title": ""
},
{
"docid": "08c97484fe3784e2f1fd42606b915f83",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "da33a718aa9dbf6e9feaff5e63765639",
"text": " This paper introduces a new frequency-domain approach to describe the relationships (direction of information flow) between multivariate time series based on the decomposition of multivariate partial coherences computed from multivariate autoregressive models. We discuss its application and compare its performance to other approaches to the problem of determining neural structure relations from the simultaneous measurement of neural electrophysiological signals. The new concept is shown to reflect a frequency-domain representation of the concept of Granger causality.",
"title": ""
},
{
"docid": "9a4e9c73465d1026c2f5c91ec17eaf74",
"text": "Devising an expressive question taxonomy is a central problem in question generation. Through examination of a corpus of human-human taskoriented tutoring, we have found that existing question taxonomies do not capture all of the tutorial questions present in this form of tutoring. We propose a hierarchical question classification scheme for tutorial questions in which the top level corresponds to the tutor’s goal and the second level corresponds to the question type. The application of this hierarchical classification scheme to a corpus of keyboard-to-keyboard tutoring of introductory computer science yielded high inter-rater reliability, suggesting that such a scheme is appropriate for classifying tutor questions in design-oriented tutoring. We discuss numerous open issues that are highlighted by the current analysis.",
"title": ""
},
{
"docid": "db2e7cc9ea3d58e0c625684248e2ef80",
"text": "PURPOSE\nTo review applications of Ajzen's theory of planned behavior in the domain of health and to verify the efficiency of the theory to explain and predict health-related behaviors.\n\n\nMETHODS\nMost material has been drawn from Current Contents (Social and Behavioral Sciences and Clinical Medicine) from 1985 to date, together with all peer-reviewed articles cited in the publications thus identified.\n\n\nFINDINGS\nThe results indicated that the theory performs very well for the explanation of intention; an averaged R2 of .41 was observed. Attitude toward the action and perceived behavioral control were most often the significant variables responsible for this explained variation in intention. The prediction of behavior yielded an averaged R2 of .34. Intention remained the most important predictor, but in half of the studies reviewed perceived behavioral control significantly added to the prediction.\n\n\nCONCLUSIONS\nThe efficiency of the model seems to be quite good for explaining intention, perceived behavioral control being as important as attitude across health-related behavior categories. The efficiency of the theory, however, varies between health-related behavior categories.",
"title": ""
},
{
"docid": "4630ade03760cb8ec1da11b16703b3f1",
"text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia have been published. One hundred and sixty six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consist of an undifferentiated fever. Clinical suspicion and ability to identify patients at risk of severe dengue infection is important. Treatment of dengue infection involves judicious use of volume expander and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.",
"title": ""
},
{
"docid": "fdbdac5f319cd46aeb73be06ed64cbb9",
"text": "Recently deep neural networks (DNNs) have been used to learn speaker features. However, the quality of the learned features is not sufficiently good, so a complex back-end model, either neural or probabilistic, has to be used to address the residual uncertainty when applied to speaker verification. This paper presents a convolutional time-delay deep neural network structure (CT-DNN) for speaker feature learning. Our experimental results on the Fisher database demonstrated that this CT-DNN can produce high-quality speaker features: even with a single feature (0.3 seconds including the context), the EER can be as low as 7.68%. This effectively confirmed that the speaker trait is largely a deterministic short-time property rather than a longtime distributional pattern, and therefore can be extracted from just dozens of frames.",
"title": ""
}
] |
scidocsrr
|
e40df438e69e0665fae60b6b5e0f60cb
|
Guided HTM: Hierarchical Topic Model with Dirichlet Forest Priors
|
[
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
},
{
"docid": "96c10ca887c0210615d16655f62665e0",
"text": "The two key challenges in hierarchical classification are to leverage the hierarchical dependencies between the class-labels for improving performance, and, at the same time maintaining scalability across large hierarchies. In this paper we propose a regularization framework for large-scale hierarchical classification that addresses both the problems. Specifically, we incorporate the hierarchical dependencies between the class-labels into the regularization structure of the parameters thereby encouraging classes nearby in the hierarchy to share similar model parameters. Furthermore, we extend our approach to scenarios where the dependencies between the class-labels are encoded in the form of a graph rather than a hierarchy. To enable large-scale training, we develop a parallel-iterative optimization scheme that can handle datasets with hundreds of thousands of classes and millions of instances and learning terabytes of parameters. Our experiments showed a consistent improvement over other competing approaches and achieved state-of-the-art results on benchmark datasets.",
"title": ""
}
] |
[
{
"docid": "30874858a0395085bbae6bab78696d97",
"text": "In recent years, open architecture motion controllers, including those for CNC machines and robots, have received much interest and support among the global control and automation community. This paper presents work done in extending a well-known and supported open-source control software called LinuxCNC for the control of a Delta robot, a translational parallel mechanism. Key features in the development process are covered and discussed and the final customized system based on LinuxCNC described.",
"title": ""
},
{
"docid": "4c3e4da0a2423a184911dfed7f4e7234",
"text": "Pseudo-relevance feedback (PRF) has been proven to be an effective query expansion strategy to improve retrieval performance. Several PRF methods have so far been proposed for many retrieval models. Recent theoretical studies of PRF methods show that most of the PRF methods do not satisfy all necessary constraints. Among all, the log-logistic model has been shown to be an effective method that satisfies most of the PRF constraints. In this paper, we first introduce two new PRF constraints. We further analyze the log-logistic feedback model and show that it does not satisfy these two constraints as well as the previously proposed \"relevance effect\" constraint. We then modify the log-logistic formulation to satisfy all these constraints. Experiments on three TREC newswire and web collections demonstrate that the proposed modification significantly outperforms the original log-logistic model, in all collections.",
"title": ""
},
{
"docid": "7d646fdb10b1ef9d332b6bb80bc40920",
"text": "Online financial textual information contains a large amount of investor sentiment, i.e. subjective assessment and discussion with respect to financial instruments. An effective solution to automate the sentiment analysis of such large amounts of online financial texts would be extremely beneficial. This paper presents a natural language processing (NLP) based pre-processing approach both for noise removal from raw online financial texts and for organizing such texts into an enhanced format that is more usable for feature extraction. The proposed approach integrates six NLP processing steps, including a developed syntactic and semantic combined negation handling algorithm, to reduce noise in the online informal text. Three-class sentiment classification is also introduced in each system implementation. Experimental results show that the proposed pre-processing approach outperforms other pre-processing methods. The combined negation handling algorithm is also evaluated against three standard negation handling approaches.",
"title": ""
},
{
"docid": "926e91c6db2cdb01da0d4795a7ce059f",
"text": "BACKGROUND\nSeveral behaviors, besides psychoactive substance ingestion, produce short-term reward that may engender persistent behavior, despite knowledge of adverse consequences, i.e., diminished control over the behavior. These disorders have historically been conceptualized in several ways. One view posits these disorders as lying along an impulsive-compulsive spectrum, with some classified as impulse control disorders. An alternate, but not mutually exclusive, conceptualization considers the disorders as non-substance or \"behavioral\" addictions.\n\n\nOBJECTIVES\nInform the discussion on the relationship between psychoactive substance and behavioral addictions.\n\n\nMETHODS\nWe review data illustrating similarities and differences between impulse control disorders or behavioral addictions and substance addictions. This topic is particularly relevant to the optimal classification of these disorders in the forthcoming fifth edition of the American Psychiatric Association Diagnostic and Statistical Manual of Mental Disorders (DSM-V).\n\n\nRESULTS\nGrowing evidence suggests that behavioral addictions resemble substance addictions in many domains, including natural history, phenomenology, tolerance, comorbidity, overlapping genetic contribution, neurobiological mechanisms, and response to treatment, supporting the DSM-V Task Force proposed new category of Addiction and Related Disorders encompassing both substance use disorders and non-substance addictions. Current data suggest that this combined category may be appropriate for pathological gambling and a few other better studied behavioral addictions, e.g., Internet addiction. There is currently insufficient data to justify any classification of other proposed behavioral addictions.\n\n\nCONCLUSIONS AND SCIENTIFIC SIGNIFICANCE\nProper categorization of behavioral addictions or impulse control disorders has substantial implications for the development of improved prevention and treatment strategies.",
"title": ""
},
{
"docid": "ce901f6509da9ab13d66056319c15bd8",
"text": "In this survey we overview graph-based clustering and its applications in computational linguistics. We summarize graph-based clustering as a five-part story: hypothesis, modeling, measure, algorithm and evaluation. We then survey three typical NLP problems in which graph-based clustering approaches have been successfully applied. Finally, we comment on the strengths and weaknesses of graph-based clustering and envision that graph-based clustering is a promising solution for some emerging NLP problems.",
"title": ""
},
{
"docid": "01683120a2199b55d8f4aaca27098a47",
"text": "As the microblogging service (such as Weibo) is becoming popular, spam becomes a serious problem of affecting the credibility and readability of Online Social Networks. Most existing studies took use of a set of features to identify spam, but without the consideration of the overlap and dependency among different features. In this study, we investigate the problem of spam detection by analyzing real spam dataset collections of Weibo and propose a novel hybrid model of spammer detection, called SDHM, which utilizing significant features, i.e. user behavior information, online social network attributes and text content characteristics, in an organic way. Experiments on real Weibo dataset demonstrate the power of the proposed hybrid model and the promising performance.",
"title": ""
},
{
"docid": "b53ca6bf9197c32fc52cc8bf80ee92f7",
"text": "Program code stored on the Ethereum blockchain is considered immutable, but this does not imply that its control flow cannot be modified. This bears the risk of loopholes whenever parties encode binding agreements in smart contracts. In order to quantify the issue, we define a heuristic indicator of control flow immutability, evaluate it based on a call graph of all smart contracts deployed on Ethereum, and find that two out of five smart contracts require trust in at least one third party. Besides, the analysis reveals that significant parts of the Ethereum blockchain are interspersed with debris from past attacks against the platform. We leverage the call graph to develop a method for data cleanup, which allows for less biased statistics of Ethereum use in practice.",
"title": ""
},
{
"docid": "99d76fafe2a238a061e67e4c5e5bea52",
"text": "F/OSS software has been described by many as a puzzle. In the past five years, it has stimulated the curiosity of scholars in a variety of fields, including economics, law, psychology, anthropology and computer science, so that the number of contributions on the subject has increased exponentially. The purpose of this paper is to provide a sufficiently comprehensive account of these contributions in order to draw some general conclusions on the state of our understanding of the phenomenon and identify directions for future research. The exercise suggests that what is puzzling about F/OSS is not so much the fact that people freely contribute to a good they make available to all, but rather the complexity of its institutional structure and its ability to organizationally evolve over time. JEL Classification: K11, L22, L23, L86, O31, O34.",
"title": ""
},
{
"docid": "468dca8012f6bc16bd3a5388dadd07b0",
"text": "Cloud computing is an emerging concept combining many fields of computing. The foundation of cloud computing is the delivery of services, software and processing capacity over the Internet, reducing cost, increasing storage, automating systems, decoupling of service delivery from underlying technology, and providing flexibility and mobility of information. However, the actual realization of these benefits is far from being achieved for mobile applications and open many new research questions. In order to better understand how to facilitate the building of mobile cloud-based applications, we have surveyed existing work in mobile computing through the prism of cloud computing principles. We give a definition of mobile cloud coputing and provide an overview of the results from this review, in particular, models of mobile cloud applications. We also highlight research challenges in the area of mobile cloud computing. We conclude with recommendations for how this better understanding of mobile cloud computing can help building more powerful mobile applications.",
"title": ""
},
{
"docid": "131a866cba7a8b2e4f66f2496a80cb41",
"text": "The Python language is highly dynamic, most notably due to late binding. As a consequence, programs using Python typically run an order of magnitude slower than their C counterpart. It is also a high level language whose semantic can be made more static without much change from a user point of view in the case of mathematical applications. In that case, the language provides several vectorization opportunities that are studied in this paper, and evaluated in the context of Pythran, an ahead-of-time compiler that turns Python module into C++ meta-programs.",
"title": ""
},
{
"docid": "d3a97a5015e27e0b2a043dc03d20228b",
"text": "The exponential growth of cyber-physical systems (CPS), especially in safety-critical applications, has imposed several security threats (like manipulation of communication channels, hardware components, and associated software) due to complex cybernetics and the interaction among (independent) CPS domains. These security threats have led to the development of different static as well as adaptive detection and protection techniques on different layers of the CPS stack, e.g., cross-layer and intra-layer connectivity. This paper first presents a brief overview of various security threats at different CPS layers, their respective threat models and associated research challenges to develop robust security measures. Moreover, this paper provides a brief yet comprehensive survey of the state-of-the-art static and adaptive techniques for detection and prevention, and their inherent limitations, i.e., incapability to capture the dormant or uncertainty-based runtime security attacks. To address these challenges, this paper also discusses the intelligent security measures (using machine learning-based techniques) against several characterized attacks on different layers of the CPS stack. Furthermore, we identify the associated challenges and open research problems in developing intelligent security measures for CPS. Towards the end, we provide an overview of our project on security for smart CPS along with important analyses.",
"title": ""
},
{
"docid": "a0d1d59fc987d90e500b3963ac11b2ad",
"text": "The purpose of this paper is to present the applicability of THOMAS, an architecture specially designed to model agent-based virtual organizations, in the development of a multiagent system for managing and planning routes for clients in a mall. In order to build virtual organizations, THOMAS offers mechanisms to take into account their structure, behaviour, dynamic, norms and environment. Moreover, one of the primary characteristics of the THOMAS architecture is the use of agents with reasoning and planning capabilities. These agents can perform a dynamic reorganization when they detect changes in the environment. The proposed architecture is composed of a set of related modules that are appropriate for developing systems in highly volatile environments similar to the one presented in this study. This paper presents THOMAS as well as the results obtained after having applied the system to a case study. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9b2066a48425cee0d2e31a48e13e5456",
"text": "© 2013 Emerenciano et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Biofloc Technology (BFT): A Review for Aquaculture Application and Animal Food Industry",
"title": ""
},
{
"docid": "06856cf61207a99146782e9e6e0911ef",
"text": "Customer ratings are valuable sources to understand their satisfaction and are critical for designing better customer experiences and recommendations. The majority of customers, however, do not respond to rating surveys, which makes the result less representative. To understand overall satisfaction, this paper aims to investigate how likely customers without responses had satisfactory experiences compared to those respondents. To infer customer satisfaction of such unlabeled sessions, we propose models using recurrent neural networks (RNNs) that learn continuous representations of unstructured text conversation. By analyzing online chat logs of over 170,000 sessions from Samsung’s customer service department, we make a novel finding that while labeled sessions contributed by a small fraction of customers received overwhelmingly positive reviews, the majority of unlabeled sessions would have received lower ratings by customers. The data analytics presented in this paper not only have practical implications for helping detect dissatisfied customers on live chat services but also make theoretical contributions on discovering the level of biases in online rating platforms. ACM Reference Format: Kunwoo Park, Meeyoung Cha, and Eunhee Rhim. 2018. Positivity Bias in Customer Satisfaction Ratings. InWWW ’18 Companion: The 2018 Web Conference Companion, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3184558.3186579",
"title": ""
},
{
"docid": "3e5e9eecab5937dc1ec7ab835b045445",
"text": "Kombucha is a beverage of probable Manchurian origins obtained from fermented tea by a microbial consortium composed of several bacteria and yeasts. This mixed consortium forms a powerful symbiosis capable of inhibiting the growth of potentially contaminating bacteria. The fermentation process also leads to the formation of a polymeric cellulose pellicle due to the activity of certain strains of Acetobacter sp. The tea fermentation process by the microbial consortium was able to show an increase in certain biological activities which have been already studied; however, little information is available on the characterization of its active components and their evolution during fermentation. Studies have also reported that the use of infusions from other plants may be a promising alternative.\n\n\nPRACTICAL APPLICATION\nKombucha is a traditional fermented tea whose consumption has increased in the recent years due to its multiple functional properties such as anti-inflammatory potential and antioxidant activity. The microbiological composition of this beverage is quite complex and still more research is needed in order to fully understand its behavior. This study comprises the chemical and microbiological composition of the tea and the main factors that may affect its production.",
"title": ""
},
{
"docid": "2210176bcb0f139e3f7f7716447f3920",
"text": "Automatic metadata generation provides scalability and usability for digital libraries and their collections. Machine learning methods offer robust and adaptable automatic metadata extraction. We describe a Support Vector Machine classification-based method for metadata extraction from header part of research papers and show that it outperforms other machine learning methods on the same task. The method first classifies each line of the header into one or more of 15 classes. An iterative convergence procedure is then used to improve the line classification by using the predicted class labels of its neighbor lines in the previous round. Further metadata extraction is done by seeking the best chunk boundaries of each line. We found that discovery and use of the structural patterns of the data and domain based word clustering can improve the metadata extraction performance. An appropriate feature normalization also greatly improves the classification performance. Our metadata extraction method was originally designed to improve the metadata extraction quality of the digital libraries Citeseer [17] and EbizSearch[24]. We believe it can be generalized to other digital libraries.",
"title": ""
},
{
"docid": "b8e921733ef4ab77abcb48b0a1f04dbb",
"text": "This paper demonstrates the efficiency of kinematic redundancy used to increase the useable workspace of planar parallel mechanisms. As examples, we propose kinematically redundant schemes of the well known planar 3RRR and 3RPR mechanisms denoted as 3(P)RRR and 3(P)RPR. In both cases, a prismatic actuator is added allowing a usually fixed base joint to move linearly. Hence, reconfigurations can be performed selectively in order to avoid singularities and to affect the mechanisms' performance directly. Using an interval-based method the useable workspace, i.e. the singularity-free workspace guaranteeing a desired performance, is obtained. Due to the interval analysis any uncertainties can be implemented within the algorithm leading to practical and realistic results. It is shown that due to the additional prismatic actuator the useable workspace increases significantly. Several analysis examples clarify the efficiency of the proposed kinematically redundant mechanisms.",
"title": ""
},
{
"docid": "4465a375859cfe6ed4c242d6896a1042",
"text": "Despite tremendous variation in the appearance of visual objects, primates can recognize a multitude of objects, each in a fraction of a second, with no apparent effort. However, the brain mechanisms that enable this fundamental ability are not understood. Drawing on ideas from neurophysiology and computation, we present a graphical perspective on the key computational challenges of object recognition, and argue that the format of neuronal population representation and a property that we term 'object tangling' are central. We use this perspective to show that the primate ventral visual processing stream achieves a particularly effective solution in which single-neuron invariance is not the goal. Finally, we speculate on the key neuronal mechanisms that could enable this solution, which, if understood, would have far-reaching implications for cognitive neuroscience.",
"title": ""
},
{
"docid": "f3115abc9b159be833560ee5276c06b7",
"text": "This paper describes a strategy on learning from time series data and on using learned model for forecasting. Time series forecasting, which analyzes and predicts a variable changing over time, has received much attention due to its use for forecasting stock prices, but it can also be used for pattern recognition and data mining. Our method for learning from time series data consists of detecting patterns within the data, describing the detected patterns, clustering the patterns, and creating a model to describe the data. It uses a change-point detection method to partition a time series into segments, each of the segments is then described by an autoregressive model. Then, it partitions all the segments into clusters, each of the clusters is considered as a state for a Markov model. It then creates the transitions between states in the Markov model based on the transitions between segments as the time series progressing. Our method for using the learned model for forecasting consists of indentifying current state, forecasting trends, and adapting to changes. It uses a moving window to monitor real-time data and creates an autoregressive model for the recently observed data, which is then matched to a state of the learned Markov model. Following the transitions of the model, it forecasts future trends. It also continues to monitor real-time data and makes corrections if necessary for adapting to changes. We implemented and successfully tested the methods for an application of load balancing on a parallel computing system.",
"title": ""
},
{
"docid": "32f72bb01626c69aaf7c3464f938c2d4",
"text": "The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.",
"title": ""
}
] |
scidocsrr
|
90885f9853c39111993466d3d1402a4c
|
Neural Programming Language
|
[
{
"docid": "7b232b0ac1a4e7249b33bd54ddeba2b3",
"text": "We present an analysis of how the generalization performance (expected test set error) relates to the expected training set error for nonlinear learning systems, such as multilayer perceptrons and radial basis functions. The principal result is the following relationship (computed to second order) between the expected test set and tlaining set errors: (1) Here, n is the size of the training sample e, u;f f is the effective noise variance in the response variable( s), ,x is a regularization or weight decay parameter, and Peff(,x) is the effective number of parameters in the nonlinear model. The expectations ( ) of training set and test set errors are taken over possible training sets e and training and test sets e' respectively. The effective number of parameters Peff(,x) usually differs from the true number of model parameters P for nonlinear or regularized models; this theoretical conclusion is supported by Monte Carlo experiments. In addition to the surprising result that Peff(,x) ;/; p, we propose an estimate of (1) called the generalized prediction error (GPE) which generalizes well established estimates of prediction risk such as Akaike's F P E and AI C, Mallows Cp, and Barron's PSE to the nonlinear setting.! lCPE and Peff(>\") were previously introduced in Moody (1991). 847",
"title": ""
},
{
"docid": "430026742eb346d5a20e3e2ba34d0544",
"text": "High-order neural networks have been shown to have impressive computational, storage, and learning capabilities. This performance is because the order or structure of a high-order neural network can be tailored to the order or structure of a problem. Thus, a neural network designed for a particular class of problems becomes specialized but also very efficient in solving those problems. Furthermore, a priori knowledge, such as geometric invariances, can be encoded in high-order networks. Because this knowledge does not have to be learned, these networks are very efficient in solving problems that utilize this knowledge.",
"title": ""
}
] |
[
{
"docid": "97c3860dfb00517f744fd9504c4e7f9f",
"text": "The plastic film surface treatment load is considered as a nonlinear capacitive load, which is rather difficult for designing of an inverter. The series resonant inverter (SRI) connected to the load via transformer has been found effective for it's driving. In this paper, a surface treatment based on a pulse density modulation (PDM) and pulse frequency modulation (PFM) hybrid control scheme is described. The PDM scheme is used to regulate the output power of the inverter and the PFM scheme is used to compensate for temperature and other environmental influences on the discharge. Experimental results show that the PDM and PFM hybrid control series-resonant inverter (SRI) makes the corona discharge treatment simple and compact, thus leading to higher efficiency.",
"title": ""
},
{
"docid": "282ace724b3c9a2e8b051499ba5e4bfe",
"text": "Fog computing, being an extension to cloud computing has addressed some issues found in cloud computing by providing additional features, such as location awareness, low latency, mobility support, and so on. Its unique features have also opened a way toward security challenges, which need to be focused for making it bug-free for the users. This paper is basically focusing on overcoming the security issues encountered during the data outsourcing from fog client to fog node. We have added Shibboleth also known as security and cross domain access control protocol between fog client and fog node for improved and secure communication between the fog client and fog node. Furthermore to prove whether Shibboleth meets the security requirement needed to provide the secure outsourcing. We have also formally verified the protocol against basic security properties using high level Petri net.",
"title": ""
},
{
"docid": "e2d25382acd23c9431ccd3905d8bf13a",
"text": "Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.",
"title": ""
},
{
"docid": "8f3c0a8098ae76755b0e2f1dc9cfc8ea",
"text": "This paper presents a new approach to structural topology optimization. We represent the structural boundary by a level set model that is embedded in a scalar function of a higher dimension. Such level set models are flexible in handling complex topological changes and are concise in describing the boundary shape of the structure. Furthermore, a wellfounded mathematical procedure leads to a numerical algorithm that describes a structural optimization as a sequence of motions of the implicit boundaries converging to an optimum solution and satisfying specified constraints. The result is a 3D topology optimization technique that demonstrates outstanding flexibility of handling topological changes, fidelity of boundary representation and degree of automation. We have implemented the algorithm with the use of several robust and efficient numerical techniques of level set methods. The benefit and the advantages of the proposed method are illustrated with several 2D examples that are widely used in the recent literature of topology optimization, especially in the homogenization based methods. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "2d30da3bf7d89e8e515c7896153c2dea",
"text": "The Flexible Display Center (FDC) at Arizona State University (ASU) was founded in 2004 as a partnership between academia, industry, and government to collaborate on the development of a new generation of innovative displays and electronic circuits that are flexible, lightweight, low power, and rugged [1]. Due to the increasing need for flexible and lightweight electronic systems, FDC aims to develop materials and structural platforms that allow flexible backplane electronics to be integrated with display components that are economical for mass-production [2]. Currently, FDC is focusing on the incorporation of antenna structures, which can function cooperatively with the other flexible circuit elements. Design of flexible antennas, as a part of flexible electronic circuits, may have a very wide spectrum of applications in military and civilian wireless communication, which can allow people to wear antenna structures instead of carry them. Hence, flexible and fluidic antennas have a great potential [3]. In this paper, the design, fabrication, simulation and measurements of a bow-tie antenna with a flexible substrate is discussed. The antenna is modeled and simulated with Ansoft HFSS, and the simulations are compared with measurements performed in the Electromagnetic Anechoic Chamber (EMAC) at ASU.",
"title": ""
},
{
"docid": "f72ffa55e939b4c28498075916f937cc",
"text": "Compressed sensing is now established as an effective method for dimension reduction when the underlying signals are sparse or compressible with respect to some suitable basis or frame. One important, yet under-addressed problem regarding the compressive acquisition of analog signals is how to perform quantization. This is directly related to the important issues of how “compressed” compressed sensing is (in terms of the total number of bits one ends up using after acquiring the signal) and ultimately whether compressed sensing can be used to obtain compressed representations of suitable signals. In this paper, we propose a concrete and practicable method for performing “analog-to-information conversion”. Following a compressive signal acquisition stage, the proposed method consists of a quantization stage, based on $ \\Sigma \\Delta $ (sigma-delta) quantization, and a subsequent encoding (compression) stage that fits within the framework of compressed sensing seamlessly. We prove that, using this method, we can convert analog compressive samples to compressed digital bitstreams and decode using tractable algorithms based on convex optimization. We prove that the proposed analog-to-information converter (AIC) provides a nearly optimal encoding of sparse and compressible signals. Finally, we present numerical experiments illustrating the effectiveness of the proposed AIC.",
"title": ""
},
{
"docid": "2acb0196e14d70717836bf0b37fcb191",
"text": "Dictionaries are very useful objects for data analysis, as they enable a compact representation of large sets of objects through the combination of atoms. Dictionary-based techniques have also particularly benefited from the recent advances in machine learning, which has allowed for data-driven algorithms to take advantage of the redundancy in the input dataset and discover relations between objects without human supervision or hard-coded rules. Despite the success of dictionary-based techniques on a wide range of tasks in geometric modeling and geometry processing, the literature is missing a principled state-of-the-art of the current knowledge in this field. To fill this gap, we provide in this survey an overview of data-driven dictionary-based methods in geometric modeling. We structure our discussion by application domain: surface reconstruction, compression, and synthesis. Contrary to previous surveys, we place special emphasis on dictionary-based methods suitable for 3D data synthesis, with applications in geometric modeling and design. Our ultimate goal is to enlight the fact that these techniques can be used to combine the data-driven paradigm with design intent to synthesize new plausible objects with minimal human intervention. This is the main motivation to restrict the scope of the present survey to techniques handling point clouds and meshes, making use of dictionaries whose definition depends on the input data, and enabling shape reconstruction or synthesis through the combination of atoms. CCS Concepts •Computing methodologies → Shape modeling; Mesh models; Mesh geometry models; Point-based models; Shape analysis;",
"title": ""
},
{
"docid": "7fd48dcff3d5d0e4bfccc3be67db8c00",
"text": "Criollo cacao (Theobroma cacao ssp. cacao) was cultivated by the Mayas over 1500 years ago. It has been suggested that Criollo cacao originated in Central America and that it evolved independently from the cacao populations in the Amazon basin. Cacao populations from the Amazon basin are included in the second morphogeographic group: Forastero, and assigned to T. cacao ssp. sphaerocarpum. To gain further insight into the origin and genetic basis of Criollo cacao from Central America, RFLP and microsatellite analyses were performed on a sample that avoided mixing pure Criollo individuals with individuals classified as Criollo but which might have been introgressed with Forastero genes. We distinguished these two types of individuals as Ancient and Modern Criollo. In contrast to previous studies, Ancient Criollo individuals formerly classified as ‘wild’, were found to form a closely related group together with Ancient Criollo individuals from South America. The Ancient Criollo trees were also closer to Colombian-Ecuadorian Forastero individuals than these Colombian-Ecuadorian trees were to other South American Forastero individuals. RFLP and microsatellite analyses revealed a high level of homozygosity and significantly low genetic diversity within the Ancient Criollo group. The results suggest that the Ancient Criollo individuals represent the original Criollo group. The results also implies that this group does not represent a separate subspecies and that it probably originated from a few individuals in South America that may have been spread by man within Central America.",
"title": ""
},
{
"docid": "061e91fba7571b8e601b54e1cfc1d71e",
"text": "The training of medical image analysis systems using machine learning approaches follows a common script: collect and annotate a large dataset, train the classifier on the training set, and test it on a hold-out test set. This process bears no direct resemblance with radiologist training, which is based on solving a series of tasks of increasing difficulty, where each task involves the use of significantly smaller datasets than those used in machine learning. In this paper, we propose a novel training approach inspired by how radiologists are trained. In particular, we explore the use of meta-training that models a classifier based on a series of tasks. Tasks are selected using teacher-student curriculum learning, where each task consists of simple classification problems containing small training sets. We hypothesize that our proposed meta-training approach can be used to pre-train medical image analysis models. This hypothesis is tested on the automatic breast screening classification from DCE-MRI trained with weakly labeled datasets. The classification performance achieved by our approach is shown to be the best in the field for that application, compared to state of art baseline approaches: DenseNet, multiple instance learning and multi-task learning.",
"title": ""
},
{
"docid": "9233195d4f25e21a4de1a849d8f47932",
"text": "For the first time, the DRAM device composed of 6F/sup 2/ open-bit-line memory cell with 80nm feature size is developed. Adopting 6F/sup 2/ scheme instead of customary 8F/sup 2/ scheme made it possible to reduce chip size by up to nearly 20%. However, converting the cell scheme to 6F/sup 2/ accompanies some difficulties such as decrease of the cell capacitance, and more compact core layout. To overcome this strict obstacles which are originally stemming from the conversion of cell scheme to 6F/sup 2/, TIT structure with AHO (AfO/AlO/AfO) is adopted for higher cell capacitance, and bar-type contact is adopted for adjusting to compact core layout. Moreover, to lower cell V/sub th/ so far as suitable for characteristic of low power operation, the novel concept, S-RCAT (sphere-shaped-recess-channel-array transistor) is introduced. It is the improved scheme of RCAT used in 8F/sup 2/ scheme. By adopting S-RCAT, V/sub th/ can be lowered, SW, DIBL are improved. Additionally, data retention time characteristic can be improved.",
"title": ""
},
{
"docid": "869889e8be00663e994631b17061479b",
"text": "In this study we approach the problem of distinguishing general profanity from hate speech in social media, something which has not been widely considered. Using a new dataset annotated specifically for this task, we employ supervised classification along with a set of features that includes n-grams, skip-grams and clustering-based word representations. We apply approaches based on single classifiers as well as more advanced ensemble classifiers and stacked generalization, achieving the best result of 80% accuracy for this 3-class classification task. Analysis of the results reveals that discriminating hate speech and profanity is not a simple task, which may require features that capture a deeper understanding of the text not always possible with surface n-grams. The variability of gold labels in the annotated data, due to differences in the subjective adjudications of the annotators, is also an issue. Other directions for future work are discussed.",
"title": ""
},
{
"docid": "e3853e259c3ae6739dcae3143e2074a8",
"text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.",
"title": ""
},
{
"docid": "a6d0c3a9ca6c2c4561b868baa998dace",
"text": "Diprosopus or duplication of the lower lip and mandible is a very rare congenital anomaly. We report this unusual case occurring in a girl who presented to our hospital at the age of 4 months. Surgery and problems related to this anomaly are discussed.",
"title": ""
},
{
"docid": "c9c9af3680df50d4dd72c73c90a41893",
"text": "BACKGROUND\nVideo games provide extensive player involvement for large numbers of children and adults, and thereby provide a channel for delivering health behavior change experiences and messages in an engaging and entertaining format.\n\n\nMETHOD\nTwenty-seven articles were identified on 25 video games that promoted health-related behavior change through December 2006.\n\n\nRESULTS\nMost of the articles demonstrated positive health-related changes from playing the video games. Variability in what was reported about the games and measures employed precluded systematically relating characteristics of the games to outcomes. Many of these games merged the immersive, attention-maintaining properties of stories and fantasy, the engaging properties of interactivity, and behavior-change technology (e.g., tailored messages, goal setting). Stories in video games allow for modeling, vicarious identifying experiences, and learning a story's \"moral,\" among other change possibilities.\n\n\nCONCLUSIONS\nResearch is needed on the optimal use of game-based stories, fantasy, interactivity, and behavior change technology in promoting health-related behavior change.",
"title": ""
},
{
"docid": "6494669dc199660c50e22d4eb62646fb",
"text": "Recent advances in the instrumentation technology of sensory substitution have presented new opportunities to develop systems for compensation of sensory loss. In sensory substitution (e.g. of sight or vestibular function), information from an artificial receptor is coupled to the brain via a human-machine interface. The brain is able to use this information in place of that usually transmitted from an intact sense organ. Both auditory and tactile systems show promise for practical sensory substitution interface sites. This research provides experimental tools for examining brain plasticity and has implications for perceptual and cognition studies more generally.",
"title": ""
},
{
"docid": "13a8cd624d30c0bb022eed43c69af565",
"text": "This paper presents a design procedure of an ultra wideband three section slot coupled hybrid coupler employing a parametric analysis of different design parameters. The coupler configuration is composed of a modified hexagonal shape at the top and bottom conductor plane along with a hexagonal slot at the common ground plane. The coupler performance for different design parameters is studied through full wave simulations. A final design providing a return loss and isolation better than 20dB, an amplitude imbalance between output ports of less than 0.9dB and a phase imbalance of ±1.9° across the 3.1-10.6 GHz band is confirmed.",
"title": ""
},
{
"docid": "1778e5f82da9e90cbddfa498d68e461e",
"text": "Today’s business environment is characterized by fast and unexpected changes, many of which are driven by technological advancement. In such environment, the ability to respond effectively and adapt to the new requirements is not only desirable but essential to survive. Comprehensive and quick understanding of intricacies of market changes facilitates firm’s faster and better response. Two concepts contribute to the success of this scenario; organizational agility and business intelligence (BI). As of today, despite BI’s capabilities to foster organizational agility and consequently improve organizational performance, a clear link between BI and organizational agility has not been established. In this paper we argue that BI solutions have the potential to be facilitators for achieving agility. We aim at showing how BI capabilities can help achieve agility at operational, portfolio, and strategic levels.",
"title": ""
},
{
"docid": "553e476ad6a0081aed01775f995f4d16",
"text": "This document describes the findings of the Second Workshop on Neural Machine Translation and Generation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2018). First, we summarize the research trends of papers presented in the proceedings, and note that there is particular interest in linguistic structure, domain adaptation, data augmentation, handling inadequate resources, and analysis of models. Second, we describe the results of the workshop’s shared task on efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient.",
"title": ""
},
{
"docid": "570e48e839bd2250473d4332adf2b53f",
"text": "Autologous stem cell transplant can be a curative therapy to restore normal hematopoiesis after myeloablative treatments in patients with malignancies. Aim: To evaluate the effect of rehabilitation program for caregivers about patients’ post autologous bone marrow transplantation Research Design: A quasi-experimental design was used. Setting: The study was conducted in Sheikh Zayed Specialized Hospital at Oncology Outpatient Clinic of Bone Marrow Transplantation Unit. Sample: A purposive sample comprised; a total number of 60 patients, their age ranged from 21 to 50 years, free from any other chronic disease and the caregivers are living with the patients in the same home. Tools: Two tools were used for data collection. First tool: An interviewing autologous bone marrow transplantation questionnaire for the patients and their caregivers was divided into five parts; Including: Socio-demographic data, knowledge of caregivers regarding autologous bone marrow transplant and side effect of chemotherapy, family caregivers’ practices according to their providing care related to post bone marrow transplantation, signs and symptoms, activities of daily living for patients and home environmental sanitation for the patients. Second tool: deals with physical examination assessment of the patients from head to toe. Results: 61.7% of patients aged 30˂40 years, and 68.3 % were female. Regarding the type of relationship with the patients, 48.3% were the mother, 58.3% of patients who underwent autologous bone marrow transplantation had a sanitary environment and there were highly statistically significant differences between caregivers’ knowledge and practices pre/post program. Conclusion: There were highly statistically significant differences between family caregivers' total knowledge, their practices, as well as their total caregivers’ knowledge, practices and patients’ independency level pre/post rehabilitation program. . Recommendations: Counseling for family caregivers of patients who underwent autologous bone marrow transplantation and carrying out rehabilitation program for the patients and their caregivers to be performed properly during the rehabilitation period at caner hospitals such as 57357 Hospital and The National Cancer Institute in Cairo.",
"title": ""
},
{
"docid": "1564a94998151d52785dd0429b4ee77d",
"text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of update costs of location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. The study presents an intelligent location management approach which has interacts between intelligent information system and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transition between the VLR and HLR. The study provides algorithms are ability to handle location registration and call delivery.",
"title": ""
}
] |
scidocsrr
|
60cd53823c660a62dc62a36d1925ffab
|
Healthcare Insurance Fraud Detection Leveraging Big Data Analytics
|
[
{
"docid": "a0f8af71421d484cbebb550a0bf59a6d",
"text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.",
"title": ""
}
] |
[
{
"docid": "ac56668cdaad25e9df31f71bc6d64995",
"text": "Hand-crafted illustrations are often more effective than photographs for conveying the shape and important features of an object, but they require expertise and time to produce. We describe an image compositing system and user interface that allow an artist to quickly and easily create technical illustrations from a set of photographs of an object taken from the same point of view under variable lighting conditions. Our system uses a novel compositing process in which images are combined using spatially-varying light mattes, enabling the final lighting in each area of the composite to be manipulated independently. We describe an interface that provides for the painting of local lighting effects (e.g. shadows, highlights, and tangential lighting to reveal texture) directly onto the composite. We survey some of the techniques used in illustration and lighting design to convey the shape and features of objects and describe how our system can be used to apply these techniques.",
"title": ""
},
{
"docid": "f3b76c5ad1841a56e6950f254eda8b17",
"text": "Due to the complexity of human languages, most of sentiment classification algorithms are suffered from a huge-scale dimension of vocabularies which are mostly noisy and redundant. Deep Belief Networks (DBN) tackle this problem by learning useful information in input corpus with their several hidden layers. Unfortunately, DBN is a time-consuming and computationally expensive process for large-scale applications. In this paper, a semi-supervised learning algorithm, called Deep Belief Networks with Feature Selection (DBNFS) is developed. Using our chi-squared based feature selection, the complexity of the vocabulary input is decreased since some irrelevant features are filtered which makes the learning phase of DBN more efficient. The experimental results of our proposed DBNFS shows that the proposed DBNFS can achieve higher classification accuracy and can speed up training time compared with others well-known semi-supervised learning algorithms.",
"title": ""
},
{
"docid": "0e068a4e7388ed456de4239326eb9b08",
"text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.",
"title": ""
},
{
"docid": "b07978a3871f0ba26fd6d1eb568b1b0a",
"text": "This paper presents an intermodulation distortion measurement system based on automated feedforward cancellation that achieves 113 dB of broadband spurious-free dynamic range for discrete tone separations down to 100 Hz. For 1-Hz tone separation, the dynamic range is 106 dB, limited by carrier phase noise. A single-tone cancellation formula is developed requiring only the power of the probing signal and the power of the combined probe and cancellation signal so that the phase shift required for cancellation can be predicted. The technique is applied to a two-path feedforward cancellation system in a bridge configuration. The effects of reflected signals and of group delay on system performance is discussed. Spurious frequency content and interchannel coupling are analyzed with respect to system linearity. Feedforward cancellation and consideration of electromagnetic radiation coupling and reverse-wave isolation effects extends the dynamic range of spectrum and vector analyzers by at least 40 dB. Application of the technique to the measurement of correlated and uncorrelated nonlinear distortion of an amplified wideband code-division multiple-access signal is presented.",
"title": ""
},
{
"docid": "5e9cc7e7933f85b6cffe103c074105d4",
"text": "Substrate-integrated waveguides (SIWs) maintain the advantages of planar circuits (low loss, low profile, easy manufacturing, and integration in a planar circuit board) and improve the quality factor of filter resonators. Empty substrate-integrated waveguides (ESIWs) substantially reduce the insertion losses, because waves propagate through air instead of a lossy dielectric. The first ESIW used a simple tapering transition that cannot be used for thin substrates. A new transition has recently been proposed, which includes a taper also in the microstrip line, not only inside the ESIW, and so it can be used for all substrates, although measured return losses are only 13 dB. In this letter, the cited transition is improved by placing via holes that prevent undesired radiation, as well as two holes that help to ensure good accuracy in the mechanization of the input iris, thus allowing very good return losses (over 20 dB) in the measured results. A design procedure that allows the successful design of the proposed new transition is also provided. A back-to-back configuration of the improved new transition has been successfully manufactured and measured.",
"title": ""
},
{
"docid": "fd94c0639346e760cf2c19aab7847270",
"text": "During the last two decades, a great number of applications for the dc-to-dc converters have been reported [1]. Many applications are found in computers, telecommunications, aeronautics, commercial, and industrial applications. The basic topologies buck, boost, and buck-boost, are widely used in the dc-to-dc conversion. These converters, as well as other converters, provide low voltages and currents for loads at a constant switching frequency. In recent years, there has been a need for wider conversion ratios with a corresponding reduction in size and weight. For example, advances in the field of semiconductors have motivated the development of new integrated circuits, which require 3.3 or 1.5 V power supplies. The automotive industry is moving from 12 V (14 V) to 36 V (42 V), the above is due to the electric-electronic load in automobiles has been growing rapidly and is starting to exceed the practical capacity of present-day electrical systems. Today, the average 12 V (14 V) load is between 750 W to 1 kW, while the peak load can be 2 kW, depending of the type of car and its accessories. By 2005, peak loads above 2 kW, even as high as 12 kW, will be common. To address this challenge, it is widely agreed that a",
"title": ""
},
{
"docid": "edaeccfe6263c1625765574443b79e68",
"text": "The elongated structure of the hippocampus is critically involved in brain functions of profound importance. The segregation of functions along the longitudinal (septotemporal or dorsoventral) axis of the hippocampus is a slowly developed concept and currently is a widely accepted idea. The segregation of neuroanatomical connections along the hippocampal long axis can provide a basis for the interpretation of the functional segregation. However, an emerging and growing body of data strongly suggests the existence of endogenous diversification in the properties of the local neural network along the long axis of the hippocampus. In particular, recent electrophysiological research provides compelling evidence demonstrating constitutively increased network excitability in the ventral hippocampus with important implications for the endogenous initiation and propagation of physiological hippocampal oscillations yet, under favorable conditions it can also drive the local network towards hyperexcitability. In addition, important specializations in the properties of dorsal and ventral hippocampal synapses may support an optimal signal processing that contributes to the effective execution of the distinct functional roles played by the two hippocampal segments.",
"title": ""
},
{
"docid": "f102cc8d3ba32f9a16f522db25143e2d",
"text": "As technology advances man-machine interaction is becoming an unavoidable activity. So an effective method of communication with machines enhances the quality of life. If it is able to operate a system by simply commanding, then it will be a great blessing to the users. Speech is the most effective mode of communication used by humans. So by introducing voice user interfaces the interaction with the machines can be made more user friendly. This paper implements a speaker independent speech recognition system for limited vocabulary Malayalam Words in Raspberry Pi. Mel Frequency Cepstral Coefficients (MFCC) are the features for classification and this paper proposes Radial Basis Function (RBF) kernel in Support Vector Machine (SVM) classifier gives better accuracy in speech recognition than linear kernel. An overall accuracy of 91.8% is obtained with this work.",
"title": ""
},
{
"docid": "9c80e8db09202335f427ebf02659eac3",
"text": "The present paper reviews and critiques studies assessing the relation between sleep patterns, sleep quality, and school performance of adolescents attending middle school, high school, and/or college. The majority of studies relied on self-report, yet the researchers approached the question with different designs and measures. Specifically, studies looked at (1) sleep/wake patterns and usual grades, (2) school start time and phase preference in relation to sleep habits and quality and academic performance, and (3) sleep patterns and classroom performance (e.g., examination grades). The findings strongly indicate that self-reported shortened total sleep time, erratic sleep/wake schedules, late bed and rise times, and poor sleep quality are negatively associated with academic performance for adolescents from middle school through the college years. Limitations of the current published studies are also discussed in detail in this review.",
"title": ""
},
{
"docid": "8fa135e5d01ba2480dea4621ceb1e9f4",
"text": "With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art in Grid workflow systems, but also identifies the areas that need further research.",
"title": ""
},
{
"docid": "49b0ba019f6f968804608aeacec2a959",
"text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.",
"title": ""
},
{
"docid": "c11b77f1392c79f4a03f9633c8f97f4d",
"text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.",
"title": ""
},
{
"docid": "dcee61dad66f59b2450a3e154726d6b1",
"text": "Mussels are marine organisms that have been mimicked due to their exceptional adhesive properties to all kind of surfaces, including rocks, under wet conditions. The proteins present on the mussel's foot contain 3,4-dihydroxy-l-alanine (DOPA), an amino acid from the catechol family that has been reported by their adhesive character. Therefore, we synthesized a mussel-inspired conjugated polymer, modifying the backbone of hyaluronic acid with dopamine by carbodiimide chemistry. Ultraviolet-visible (UV-vis) spectroscopy and nuclear magnetic resonance (NMR) techniques confirmed the success of this modification. Different techniques have been reported to produce two-dimensional (2D) or three-dimensional (3D) systems capable to support cells and tissue regeneration; among others, multilayer systems allow the construction of hierarchical structures from nano- to macroscales. In this study, the layer-by-layer (LbL) technique was used to produce freestanding multilayer membranes made uniquely of chitosan and dopamine-modified hyaluronic acid (HA-DN). The electrostatic interactions were found to be the main forces involved in the film construction. The surface morphology, chemistry, and mechanical properties of the freestanding membranes were characterized, confirming the enhancement of the adhesive properties in the presence of HA-DN. The MC3T3-E1 cell line was cultured on the surface of the membranes, demonstrating the potential of these freestanding multilayer systems to be used for bone tissue engineering.",
"title": ""
},
{
"docid": "6da745a03e290f312f7cd2960ebe54b8",
"text": "INTRODUCTION\nThe aim of effective clinical handover is seamless transfer of information between care providers. Handover between paramedics and the trauma team provides challenges in ensuring that information loss does not occur. Handover is often time-pressured and paramedics' clinical notes are often delayed in reaching the trauma team. Documentation by trauma team members must be accurate. This study evaluated information loss and discordance as patients were transferred from the scene of an incident to the Trauma Centre.\n\n\nMETHODS\nTwenty-five trauma patients presenting by ambulance to a tertiary Emergency and Trauma Centre were randomly selected. Audiotaped (pre-hospital) and videotaped (in-hospital) handover was compared with written documentation.\n\n\nRESULTS\nIn the pre-hospital setting 171/228 (75%) of data items handed over by paramedics to the trauma team were documented and in the in-hospital handover 335/498 (67%) of information was documented. Information least likely to be documented by trauma team members (1) in the pre-hospital setting related to treatment provided and (2) in the in-hospital setting related to signs and symptoms. While 79% of information was subsequently documented by paramedics, 9% (n=59) of information was not documented either by trauma team members or paramedics and constitutes information loss. Information handed over was not congruent with documentation on seven occasions. Discrepancies included a patient's allergy status and sites of injury (n=2). Demographic details were most likely to be documented but not handed over by paramedics.\n\n\nCONCLUSION\nBy documenting where deficits in handover occur we can identify points of vulnerability and strategies to capture this information.",
"title": ""
},
{
"docid": "6b1a1c36fa583391eb8b142368837bc3",
"text": "In this paper, we present design and simulation of a compact grid array microstrip patch antenna. In the design of antenna a RT/duroid 5880 substrate having relative permittivity, thickness and loss tangent of 2.2, 1.57 mm and 0.0009 respectively, has been used. The simulated antenna performance was obtained by Computer Simulation Technology Microwave Studio (CST MWS). The antenna performance was investigated by analyzing its return loss (S11), radiation pattern, voltage standing wave ratio (VSWR) parameters. The simulated S11 parameter has shown that antenna operates for Industrial, Scientific and Medical (ISM) band and Wireless Body Area Network (WBAN) applications at 2.45 GHZ ISM, 6.25 GHZ, 8.25 GHZ and 10.45 GHZ ultra-wideband (UWB) four resonance frequencies with bandwidth > 500MHz (S11 < −10dB). The antenna directivity increased towards higher frequencies. The VSWR of resonance frequency bands is also achieved succesfully less than 2. It has been observed that the simulation result values of the antenna are suitable for WBAN applications.",
"title": ""
},
{
"docid": "8b77db6a84911c1e4d6eeb6859e16f87",
"text": "As portable electronic devices are widely used in wireless communication, analysis of RF interference becomes an essential step for IC designers. In order to test electromagnetic compatibility (EMC) of IC operating at high frequencies, IC stripline method is proposed in IEC standard. This method can be applied up to 3 GHz and covers the testing of ICs and small component. This paper represents simulation results of the open version of IC stripline in 3D EM solver. Also, the coupling effect of IC stripline method is analyzed with S-parameter results. The distributed lumped-element equivalent model is presented for explaining coupling relation between IC stripline and package. This model can be used for quick analysis for EMC of ICs.",
"title": ""
},
{
"docid": "325bbe7b00513793a1daacdc627f1974",
"text": "Perioperative coagulation management is a complex task that has a significant impact on the perioperative journey of patients. Anaesthesia providers play a critical role in the decision-making on transfusion and/or haemostatic therapy in the surgical setting. Various tests are available in identifying coagulation abnormalities in the perioperative period. While the rapidly available bedside haemoglobin measurements can guide the transfusion of red blood cells, blood product administration is guided by many in vivo and in vitro tests. The introduction of newer anticoagulant medications and the implementation of the modified in vivo coagulation cascade have given a new dimension to the field of perioperative transfusion medicine. A proper understanding of the application and interpretation of the coagulation tests is vital for a good perioperative outcome.",
"title": ""
},
{
"docid": "9f5d77e73fb63235a6e094d437f1be7e",
"text": "An improved zero-voltage and zero-current-switching (ZVZCS) full bridge dc-dc converter is proposed based on phase shift control. With an auxiliary center tapped rectifier at the secondary side, an auxiliary voltage source is applied to reset the primary current of the transformer winding. Therefore, zero-voltage switching for the leading leg switches and zero-current switching for the lagging leg switches can be achieved, respectively, without any increase of current and voltage stresses. Since the primary current in the circulating interval for the phase shift full bridge converter is eliminated, the conduction loss in primary switches is reduced. A 1 kW prototype is made to verify the theoretical analysis.",
"title": ""
},
{
"docid": "390e9e2bfb8e94d70d1dbcfbede6dd46",
"text": "Modern software-based services are implemented as distributed systems with complex behavior and failure modes. Many large tech organizations are using experimentation to verify such systems' reliability. Netflix engineers call this approach chaos engineering. They've determined several principles underlying it and have used it to run experiments. This article is part of a theme issue on DevOps.",
"title": ""
},
{
"docid": "8a20feb22ce8797fa77b5d160919789c",
"text": "We proposed the concept of hardware software co-simulation for image processing using Xilinx system generator. Recent advances in synthesis tools for SIMULINK suggest a feasible high-level approach to algorithm implementation for embedded DSP systems. An efficient FPGA based hardware design for enhancement of color and grey scale images in image and video processing. The top model – based visual development process of SIMULINK facilitates host side simulation and validation, as well as synthesis of target specific code, furthermore, legacy code written in MATLAB or ANCI C can be reuse in custom blocks. However, the code generated for DSP platforms is often not very efficient. We are implemented the Image processing applications on FPGA it can be easily design.",
"title": ""
}
] |
scidocsrr
|
f8a8f28015bc1794573d988f067cc1e4
|
Crowdsourced semantic annotation of scientific publications and tabular data in PDF
|
[
{
"docid": "ffc09744f2668e52ce84ac28887fd5fe",
"text": "As the number of research papers available on the Web has increased enormously over the years, paper recommender systems have been proposed to help researchers on automatically finding works of interest. The main problem with the current approaches is that they assume that recommending algorithms are provided with a rich set of evidence (e.g., document collections, citations, profiles) which is normally not widely available. In this paper we propose a novel source independent framework for research paper recommendation. The framework requires as input only a single research paper and generates several potential queries by using terms in that paper, which are then submitted to existing Web information sources that hold research papers. Once a set of candidate papers for recommendation is generated, the framework applies content-based recommending algorithms to rank the candidates in order to recommend the ones most related to the input paper. This is done by using only publicly available metadata (i.e., title and abstract). We evaluate our proposed framework by performing an extensive experimentation in which we analyzed several strategies for query generation and several ranking strategies for paper recommendation. Our results show that good recommendations can be obtained with simple and low cost strategies.",
"title": ""
},
{
"docid": "9eea7c3b36bf91ae439e84a051a190bb",
"text": "Recently practical approaches for managing and supporting the life-cycle of semantic content on the Web of Data made quite some progress. However, the currently least developed aspect of the semantic content life-cycle is the user-friendly manual and semi-automatic creation of rich semantic content. In this paper we present the RDFaCE approach for combining WYSIWYG text authoring with the creation of rich semantic annotations. Our approach is based on providing four different views to the content authors: a classical WYSIWYG view, a WYSIWYM (What You See Is What You Mean) view making the semantic annotations visible, a fact view and the respective HTML/RDFa source code view. The views are synchronized such that changes made in one of the views automatically update the others. They provide different means of semantic content authoring for the different personas involved in the content creation life-cycle. For bootstrapping the semantic annotation process we integrate five different text annotation services. We evaluate their accuracy and empirically show that a combination of them yields superior results.",
"title": ""
}
] |
[
{
"docid": "94ea3cbf3df14d2d8e3583cb4714c13f",
"text": "The emergence of computers as an essential tool in scientific research has shaken the very foundations of differential modeling. Indeed, the deeply-rooted abstraction of smoothness, or differentiability, seems to inherently clash with a computer's ability of storing only finite sets of numbers. While there has been a series of computational techniques that proposed discretizations of differential equations, the geometric structures they are supposed to simulate are often lost in the process.",
"title": ""
},
{
"docid": "3f1a2efdff6be4df064f3f5b978febee",
"text": "D-galactose injection has been shown to induce many changes in mice that represent accelerated aging. This mouse model has been widely used for pharmacological studies of anti-aging agents. The underlying mechanism of D-galactose induced aging remains unclear, however, it appears to relate to glucose and 1ipid metabolic disorders. Currently, there has yet to be a study that focuses on investigating gene expression changes in D-galactose aging mice. In this study, integrated analysis of gas chromatography/mass spectrometry-based metabonomics and gene expression profiles was used to investigate the changes in transcriptional and metabolic profiles in mimetic aging mice injected with D-galactose. Our findings demonstrated that 48 mRNAs were differentially expressed between control and D-galactose mice, and 51 potential biomarkers were identified at the metabolic level. The effects of D-galactose on aging could be attributed to glucose and 1ipid metabolic disorders, oxidative damage, accumulation of advanced glycation end products (AGEs), reduction in abnormal substance elimination, cell apoptosis, and insulin resistance.",
"title": ""
},
{
"docid": "b72f4554f2d7ac6c5a8000d36a099e67",
"text": "Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T1. It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.",
"title": ""
},
{
"docid": "9042a72bc42bdfd3b2f2a1fc6145b7f1",
"text": "In this paper we introduce a framework for learning from RDF data using graph kernels that count substructures in RDF graphs, which systematically covers most of the existing kernels previously defined and provides a number of new variants. Our definitions include fast kernel variants that are computed directly on the RDF graph. To improve the performance of these kernels we detail two strategies. The first strategy involves ignoring the vertex labels that have a low frequency among the instances. Our second strategy is to remove hubs to simplify the RDF graphs. We test our kernels in a number of classification experiments with real-world RDF datasets. Overall the kernels that count subtrees show the best performance. However, they are closely followed by simple bag of labels baseline kernels. The direct kernels substantially decrease computation time, while keeping performance the same. For the walks counting kernel the decrease in computation time of the approximation is so large that it thereby becomes a computationally viable kernel to use. Ignoring low frequency labels improves the performance for all datasets. The hub removal algorithm increases performance on two out of three of our smaller datasets, but has little impact when used on our larger datasets.",
"title": ""
},
{
"docid": "38ecb51f7fca71bd47248987866a10d2",
"text": "Machine Translation has been a topic of research from the past many years. Many methods and techniques have been proposed and developed. However, quality of translation has always been a matter of concern. In this paper, we outline a target language generation mechanism with the help of language English-Sanskrit language pair using rule based machine translation technique [1]. Rule Based Machine Translation provides high quality translation and requires in depth knowledge of the language apart from real world knowledge and the differences in cultural background and conceptual divisions. A string of English sentence can be translated into string of Sanskrit ones. The methodology for design and development is implemented in the form of software named as “EtranS”. KeywordsAnalysis, Machine translation, translation theory, Interlingua, language divergence, Sanskrit, natural language processing.",
"title": ""
},
{
"docid": "eaead3c8ac22ff5088222bb723d8b758",
"text": "Discrete-Time Markov Chains (DTMCs) are a widely-used formalism to model probabilistic systems. On the one hand, available tools like PRISM or MRMC offer efficient model checking algorithms and thus support the verification of DTMCs. However, these algorithms do not provide any diagnostic information in the form of counterexamples, which are highly important for the correction of erroneous systems. On the other hand, there exist several approaches to generate counterexamples for DTMCs, but all these approaches require the model checking result for completeness. In this paper we introduce a model checking algorithm for DTMCs that also supports the generation of counterexamples. Our algorithm, based on the detection and abstraction of strongly connected components, offers abstract counterexamples, which can be interactively refined by the user.",
"title": ""
},
{
"docid": "f8527ea496666ef875805d376fbd2d5d",
"text": "The rapid development of computer and robotic technologies in the last decade is giving hope to perform earlier and more accurate diagnoses of the Autism Spectrum Disorder (ASD), and more effective, consistent, and cost-conscious treatment. Besides the reduced cost, the main benefit of using technology to facilitate treatment is that stimuli produced during each session of the treatment can be controlled, which not only guarantees consistency across different sessions, but also makes it possible to focus on a single phenomenon, which is difficult even for a trained professional to perform, and deliver the stimuli according to the treatment plan. In this article, we provide a comprehensive review of research on recent technology-facilitated diagnosis and treat of children and adults with ASD. Different from existing reviews on this topic, which predominantly concern clinical issues, we focus on the engineering perspective of autism studies. All technology facilitated systems used for autism studies can be modeled as human machine interactive systems where one or more participants would constitute as the human component, and a computer-based or a robotic-based system would be the machine component. Based on this model, we organize our review with the following questions: (1) What are presented to the participants in the studies and how are the content and delivery methods enabled by technologies? (2) How are the reactions/inputs collected from the participants in response to the stimuli in the studies? (3) Are the experimental procedure and programs presented to participants dynamically adjustable based on the responses from the participants, and if so, how? and (4) How are the programs assessed?",
"title": ""
},
{
"docid": "5b149ce093d0e546a3e99f92ef1608a0",
"text": "Smartphones have been becoming ubiquitous and mobile users are increasingly relying on them to store and handle personal information. However, recent studies also reveal the disturbing fact that users’ personal information is put at risk by (rogue) smartphone applications. Existing solutions exhibit limitations in their capabilities in taming these privacy-violating smartphone applications. In this paper, we argue for the need of a new privacy mode in smartphones. The privacy mode can empower users to flexibly control in a fine-grained manner what kinds of personal information will be accessible to an application. Also, the granted access can be dynamically adjusted at runtime in a fine-grained manner to better suit a user’s needs in various scenarios (e.g., in a different time or location). We have developed a system called TISSA that implements such a privacy mode on Android. The evaluation with more than a dozen of information-leaking Android applications demonstrates its effectiveness and practicality. Furthermore, our evaluation shows that TISSA introduces negligible performance overhead.",
"title": ""
},
{
"docid": "0e5f4253ea4fba9c9c42dd579cbba76c",
"text": "Binary code search has received much attention recently due to its impactful applications, e.g., plagiarism detection, malware detection and software vulnerability auditing. However, developing an effective binary code search tool is challenging due to the gigantic syntax and structural differences in binaries resulted from different compilers, architectures and OSs. In this paper, we propose BINGO — a scalable and robust binary search engine supporting various architectures and OSs. The key contribution is a selective inlining technique to capture the complete function semantics by inlining relevant library and user-defined functions. In addition, architecture and OS neutral function filtering is proposed to dramatically reduce the irrelevant target functions. Besides, we introduce length variant partial traces to model binary functions in a program structure agnostic fashion. The experimental results show that BINGO can find semantic similar functions across architecture and OS boundaries, even with the presence of program structure distortion, in a scalable manner. Using BINGO, we also discovered a zero-day vulnerability in Adobe PDF Reader, a COTS binary.",
"title": ""
},
{
"docid": "6b04fddb55b413306c0706642c81c621",
"text": "With the proliferation of the Internet and World Wide Web applications, people are increasingly interacting with government to citizen (G2C) e-government systems. It is, therefore, important to measure the success of G2C e-government systems from citizens’ perspective. While information systems (IS) success models have received much attention among researchers, little research has been conducted to assess the success of e-government systems. Whether traditional IS success models can be extended to investigating e-government systems success needs to be addressed. This study provides the first empirical test of an adaptation of DeLone and McLean’s IS success model in the context of G2C e-government. The model consists of six dimensions: Information Quality, System Quality, Service Quality, Use, User Satisfaction, and Perceived Net Benefit. Structural equation modeling techniques were applied to data collected by questionnaire from 119 users of G2C e-government systems in Taiwan. Except the link from System Quality to Use, the hypothesized relationships between the six success variables were significantly or marginally supported by the data. The findings of this study provide several important implications for e-government research and practice. This paper concludes by discussing limitations that could be addressed in future studies.",
"title": ""
},
{
"docid": "bd320ffcd9c28e2c3ea2d69039bfdbe9",
"text": "3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To our best knowledge, this is the first publication on LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, and use the falsifying examples to make the neural network more robust through retraining. In addition, the scene images can be captured simultaneously in order for sensor fusion tasks, with a method proposed to do automatic registration between the point clouds and captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show by testing and retraining the network using point clouds from user-configured scenes, the weakness/blind spots of the neural network can be fixed.",
"title": ""
},
{
"docid": "a95ca56f64150700cd899a5b0ee1c4b8",
"text": "Due to the pervasiveness of digital technologies in all aspects of human lives, it is increasingly unlikely that a digital device is involved as goal, medium or simply ’witness’ of a criminal event. Forensic investigations include recovery, analysis and presentation of information stored in digital devices and related to computer crimes. These activities often involve the adoption of a wide range of imaging and analysis tools and the application of different techniques on different devices, with the consequence that the reconstruction and presentation activities result complicated. This work presents a method, based on Semantic Web technologies, that helps digital investigators to correlate and present information acquired from forensic data, with the aim to get a more valuable reconstruction of events or actions in order to reach case conclusions.",
"title": ""
},
{
"docid": "8292d5c1e13042aa42f1efb60058ef96",
"text": "The epithelial-to-mesenchymal transition (EMT) is a vital control point in metastatic breast cancer (MBC). TWIST1, SNAIL1, SLUG, and ZEB1, as key EMT-inducing transcription factors (EMT-TFs), are involved in MBC through different signaling cascades. This updated meta-analysis was conducted to assess the correlation between the expression of EMT-TFs and prognostic value in MBC patients. A total of 3,218 MBC patients from fourteen eligible studies were evaluated. The pooled hazard ratios (HR) for EMT-TFs suggested that high EMT-TF expression was significantly associated with poor prognosis in MBC patients (HRs = 1.72; 95% confidence intervals (CIs) = 1.53-1.93; P = 0.001). In addition, the overexpression of SLUG was the most impactful on the risk of MBC compared with TWIST1 and SNAIL1, which sponsored fixed models. Strikingly, the increased risk of MBC was less associated with ZEB1 expression. However, the EMT-TF expression levels significantly increased the risk of MBC in the Asian population (HR = 2.11, 95% CI = 1.70-2.62) without any publication bias (t = 1.70, P = 0.11). These findings suggest that the overexpression of potentially TWIST1, SNAIL1 and especially SLUG play a key role in the aggregation of MBC treatment as well as in the improvement of follow-up plans in Asian MBC patients.",
"title": ""
},
{
"docid": "39ab78b58f6ace0fc29f18a1c4ed8ebc",
"text": "We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.",
"title": ""
},
{
"docid": "5a0cfbd3d8401d4d8e437ec1a1e9458f",
"text": "Ehlers-Danlos syndrome is an inherited heterogeneous group of connective tissue disorders, characterized by abnormal collagen synthesis, affecting skin, ligaments, joints, blood vessels and other organs. It is one of the oldest known causes of bruising and bleeding and was first described by Hipprocrates in 400 BC. Edvard Ehlers, in 1901, recognized the condition as a distinct entity. In 1908, Henri-Alexandre Danlos suggested that skin extensibility and fragility were the cardinal features of the syndrome. In 1998, Beighton published the classification of Ehlers-Danlos syndrome according to the Villefranche nosology. From the 1960s the genetic make up was identified. Management of bleeding problems associated with Ehlers-Danlos has been slow to progress.",
"title": ""
},
{
"docid": "0bd96a4b417b3482a6accac0f7f927ca",
"text": "“Little languages” such as configuration files or HTML documents are commonplace in computing. This paper divides the work of implementing a little language into four parts, and presents a framework which can be used to easily conquer the implementation of each. The pieces of the framework have the unusual property that they may be extended through normal object-oriented means, allowing features to be added to a little language simply by subclassing parts of its compiler.",
"title": ""
},
{
"docid": "6cc99565a0e9081a94e82be93a67482e",
"text": "The existing shortage of therapists and caregivers assisting physically disabled individuals at home is expected to increase and become serious problem in the near future. The patient population needing physical rehabilitation of the upper extremity is also constantly increasing. Robotic devices have the potential to address this problem as noted by the results of recent research studies. However, the availability of these devices in clinical settings is limited, leaving plenty of room for improvement. The purpose of this paper is to document a review of robotic devices for upper limb rehabilitation including those in developing phase in order to provide a comprehensive reference about existing solutions and facilitate the development of new and improved devices. In particular the following issues are discussed: application field, target group, type of assistance, mechanical design, control strategy and clinical evaluation. This paper also includes a comprehensive, tabulated comparison of technical solutions implemented in various systems.",
"title": ""
},
{
"docid": "886df1aff444a120bd56a85fa4f53472",
"text": "Acoustic Event Classification (AEC) has become a significant task for machines to perceive the surrounding auditory scene. However, extracting effective representations that capture the underlying characteristics of the acoustic events is still challenging. Previous methods mainly focused on designing the audio features in a ‘hand-crafted’ manner. Interestingly, data-learnt features have been recently reported to show better performance. Up to now, these were only considered on the frame-level. In this paper, we propose an unsupervised learning framework to learn a vector representation of an audio sequence for AEC. This framework consists of a Recurrent Neural Network (RNN) encoder and a RNN decoder, which respectively transforms the variable-length audio sequence into a fixed-length vector and reconstructs the input sequence on the generated vector. After training the encoder-decoder, we feed the audio sequences to the encoder and then take the learnt vectors as the audio sequence representations. Compared with previous methods, the proposed method can not only deal with the problem of arbitrary-lengths of audio streams, but also learn the salient information of the sequence. Extensive evaluation on a large-size acoustic event database is performed, and the empirical results demonstrate that the learnt audio sequence representation yields a significant performance improvement by a large margin compared with other state-of-the-art hand-crafted sequence features for AEC.",
"title": ""
},
{
"docid": "9f84ec96cdb45bcf333db9f9459a3d86",
"text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 × 2 and 2 × 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.",
"title": ""
},
{
"docid": "97cc1bbb077bb11613299b0c829eee39",
"text": "Field Programmable Gate Array (FPGA) implementations of sorting algorithms have proven to be efficient, but existing implementations lack portability and maintainability because they are written in low-level hardware description languages that require substantial domain expertise to develop and maintain. To address this problem, we develop a framework that generates sorting architectures for different requirements (speed, area, power, etc.). Our framework provides ten highly optimized basic sorting architectures, easily composes basic architectures to generate hybrid sorting architectures, enables non-hardware experts to quickly design efficient hardware sorters, and facilitates the development of customized heterogeneous FPGA/CPU sorting systems. Experimental results show that our framework generates architectures that perform at least as well as existing RTL implementations for arrays smaller than 16K elements, and are comparable to RTL implementations for sorting larger arrays. We demonstrate a prototype of an end-to-end system using our sorting architectures for large arrays (16K-130K) on a heterogeneous FPGA/CPU system.",
"title": ""
}
] |
scidocsrr
|
13d78c0927444d2f6528c8d31fefb8dd
|
Deep Reinforcement Learning for Autonomous Driving
|
[
{
"docid": "9984fc080b1f2fe2bf4910b9091591a7",
"text": "In the modern era, the vehicles are focused to be automated to give human driver relaxed driving. In the field of automobile various aspects have been considered which makes a vehicle automated. Google, the biggest network has started working on the self-driving cars since 2010 and still developing new changes to give a whole new level to the automated vehicles. In this paper we have focused on two applications of an automated car, one in which two vehicles have same destination and one knows the route, where other don't. The following vehicle will follow the target (i.e. Front) vehicle automatically. The other application is automated driving during the heavy traffic jam, hence relaxing driver from continuously pushing brake, accelerator or clutch. The idea described in this paper has been taken from the Google car, defining the one aspect here under consideration is making the destination dynamic. This can be done by a vehicle automatically following the destination of another vehicle. Since taking intelligent decisions in the traffic is also an issue for the automated vehicle so this aspect has been also under consideration in this paper.",
"title": ""
},
{
"docid": "8665711daa00dac270ed0830e43acdde",
"text": "Deep learning-based approaches have been widely used for training controllers for autonomous vehicles due to their powerful ability to approximate nonlinear functions or policies. However, the training process usually requires large labeled data sets and takes a lot of time. In this paper, we analyze the influences of features on the performance of controllers trained using the convolutional neural networks (CNNs), which gives a guideline of feature selection to reduce computation cost. We collect a large set of data using The Open Racing Car Simulator (TORCS) and classify the image features into three categories (sky-related, roadside-related, and road-related features). We then design two experimental frameworks to investigate the importance of each single feature for training a CNN controller. The first framework uses the training data with all three features included to train a controller, which is then tested with data that has one feature removed to evaluate the feature's effects. The second framework is trained with the data that has one feature excluded, while all three features are included in the test data. Different driving scenarios are selected to test and analyze the trained controllers using the two experimental frameworks. The experiment results show that (1) the road-related features are indispensable for training the controller, (2) the roadside-related features are useful to improve the generalizability of the controller to scenarios with complicated roadside information, and (3) the sky-related features have limited contribution to train an end-to-end autonomous vehicle controller.",
"title": ""
},
{
"docid": "be283056a8db3ab5b2481f3dc1f6526d",
"text": "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.",
"title": ""
},
{
"docid": "b1e4fb97e4b1d31e4064f174e50f17d3",
"text": "We propose an inverse reinforcement learning (IRL) approach using Deep QNetworks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.",
"title": ""
},
{
"docid": "03097e1239e5540fe1ec45729d1cbbc2",
"text": "Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQ’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQ. In particular, we tested PGQ on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.",
"title": ""
}
] |
[
{
"docid": "375470d901a7d37698d34747621667ce",
"text": "RNA interference (RNAi) has recently emerged as a specific and efficient method to silence gene expression in mammalian cells either by transfection of short interfering RNAs (siRNAs; ref. 1) or, more recently, by transcription of short hairpin RNAs (shRNAs) from expression vectors and retroviruses. But the resistance of important cell types to transduction by these approaches, both in vitro and in vivo, has limited the use of RNAi. Here we describe a lentiviral system for delivery of shRNAs into cycling and non-cycling mammalian cells, stem cells, zygotes and their differentiated progeny. We show that lentivirus-delivered shRNAs are capable of specific, highly stable and functional silencing of gene expression in a variety of cell types and also in transgenic mice. Our lentiviral vectors should permit rapid and efficient analysis of gene function in primary human and animal cells and tissues and generation of animals that show reduced expression of specific genes. They may also provide new approaches for gene therapy.",
"title": ""
},
{
"docid": "a99e30d406d5053d8345b36791899238",
"text": "Advances in sequencing technologies and increased access to sequencing services have led to renewed interest in sequence and genome assembly. Concurrently, new applications for sequencing have emerged, including gene expression analysis, discovery of genomic variants and metagenomics, and each of these has different needs and challenges in terms of assembly. We survey the theoretical foundations that underlie modern assembly and highlight the options and practical trade-offs that need to be considered, focusing on how individual features address the needs of specific applications. We also review key software and the interplay between experimental design and efficacy of assembly.",
"title": ""
},
{
"docid": "356a72153f61311546f6ff874ee79bb4",
"text": "In this paper, an object cosegmentation method based on shape conformability is proposed. Different from the previous object cosegmentation methods which are based on the region feature similarity of the common objects in image set, our proposed SaCoseg cosegmentation algorithm focuses on the shape consistency of the foreground objects in image set. In the proposed method, given an image set where the implied foreground objects may be varied in appearance but share similar shape structures, the implied common shape pattern in the image set can be automatically mined and regarded as the shape prior of those unsatisfactorily segmented images. The SaCoseg algorithm mainly consists of four steps: 1) the initial Grabcut segmentation; 2) the shape mapping by coherent point drift registration; 3) the common shape pattern discovery by affinity propagation clustering; and 4) the refinement by Grabcut with common shape constraint. To testify our proposed algorithm and establish a benchmark for future work, we built the CoShape data set to evaluate the shape-based cosegmentation. The experiments on CoShape data set and the comparison with some related cosegmentation algorithms demonstrate the good performance of the proposed SaCoseg algorithm.",
"title": ""
},
{
"docid": "0616a6a220d117f00cc97526f3e493c5",
"text": "To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them. In this chapter, we describe the Alexey Kurakin, Ian Goodfellow, Samy Bengio Google Brain Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu Department of Computer Science and Technology, Tsinghua University Cihang Xie, Zhishuai Zhang, Alan Yuille Department of Computer Science, The Johns Hopkins University Jianyu Wang Baidu Research USA",
"title": ""
},
{
"docid": "7efa3543711bc1bb6e3a893ed424b75d",
"text": "This dissertation is concerned with the creation of training data and the development of probability models for statistical parsing of English with Combinatory Categorial Grammar (CCG). Parsing, or syntactic analysis, is a prerequisite for semantic interpretation, and forms therefore an integral part of any system which requires natural language understanding. Since almost all naturally occurring sentences are ambiguous, it is not sufficient (and often impossible) to generate all possible syntactic analyses. Instead, the parser needs to rank competing analyses and select only the most likely ones. A statistical parser uses a probability model to perform this task. I propose a number of ways in which such probability models can be defined for CCG. The kinds of models developed in this dissertation, generative models over normal-form derivation trees, are particularly simple, and have the further property of restricting the set of syntactic analyses to those corresponding to a canonical derivation structure. This is important to guarantee that parsing can be done efficiently. In order to achieve high parsing accuracy, a large corpus of annotated data is required to estimate the parameters of the probability models. Most existing wide-coverage statistical parsers use models of phrase-structure trees estimated from the Penn Treebank, a 1-million-word corpus of manually annotated sentences from the Wall Street Journal. This dissertation presents an algorithm which translates the phrase-structure analyses of the Penn Treebank to CCG derivations. The resulting corpus, CCGbank, is used to train and test the models proposed in this dissertation. Experimental results indicate that parsing accuracy (when evaluated according to a comparable metric, the recovery of unlabelled word-word dependency relations), is as high as that of standard Penn Treebank parsers which use similar modelling techniques. Most existing wide-coverage statistical parsers use simple phrase-structure grammars whose syntactic analyses fail to capture long-range dependencies, and therefore do not correspond to directly interpretable semantic representations. By contrast, CCG is a grammar formalism in which semantic representations that include long-range dependencies can be built directly during the derivation of syntactic structure. These dependencies define the predicate-argument structure of a sentence, and are used for two purposes in this dissertation: First, the performance of the parser can be evaluated according to how well it recovers these dependencies. In contrast to purely syntactic evaluations, this yields a direct measure of how accurate the semantic interpretations returned by the parser are. Second, I propose a generative model that captures the local and non-local dependencies in the predicate-argument structure, and investigate the impact of modelling non-local in addition to local dependencies.",
"title": ""
},
{
"docid": "0d59ab6748a16bf4deedfc8bd79e4d71",
"text": "Paget's disease (PD) is a chronic progressive disease of the bone characterized by abnormal bone metabolism affecting either a single bone (monostotic) or many bones (polyostotic) with uncertain etiology. We report a case of PD in a 70-year-old male, which was initially identified as osteonecrosis of the maxilla. Non-drug induced osteonecrosis in PD is rare and very few cases have been reported in the literature.",
"title": ""
},
{
"docid": "4a5abe07b93938e7549df068967731fc",
"text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.",
"title": ""
},
{
"docid": "749dd1398938c5517858384c616ecaff",
"text": "An asymmetric wideband dual-polarized bilateral tapered slot antenna (DBTSA) is proposed in this letter for wireless EMC measurements. The DBTSA is formed by two bilateral tapered slot antennas designed with low cross polarization. With careful design, the achieved DBTSA not only has a wide operating frequency band, but also maintains a single main-beam from 700 MHz to 20 GHz. This is a significant improvement compared to the conventional dual-polarized tapered slot antennas, which suffer from main-beam split in the high-frequency band. The innovative asymmetric configuration of the proposed DBTSA significantly reduces the field coupling between the two antenna elements, so that low cross polarization and high port isolation are obtained across the entire frequency range. All these intriguing characteristics make the proposed DBTSA a good candidate for a dual-polarized sensor antenna for wireless EMC measurements.",
"title": ""
},
{
"docid": "e63eac157bd750ca39370fd5b9fdf85e",
"text": "Allometric scaling relations, including the 3/4 power law for metabolic rates, are characteristic of all organisms and are here derived from a general model that describes how essential materials are transported through space-filling fractal networks of branching tubes. The model assumes that the energy dissipated is minimized and that the terminal tubes do not vary with body size. It provides a complete analysis of scaling relations for mammalian circulatory systems that are in agreement with data. More generally, the model predicts structural and functional properties of vertebrate cardiovascular and respiratory systems, plant vascular systems, insect tracheal tubes, and other distribution networks.",
"title": ""
},
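The quarter-power scaling relation described in the passage above is easy to illustrate numerically. This is only a sketch: the normalisation constant B0 and the body masses below are made-up example values, not data from the paper.

```python
# Illustrative check of the 3/4-power allometric scaling law: B = B0 * M**(3/4).
B0 = 3.4  # hypothetical normalisation constant, chosen only for illustration

def metabolic_rate(mass_kg, b0=B0, exponent=0.75):
    """Predicted whole-organism metabolic rate under quarter-power scaling."""
    return b0 * mass_kg ** exponent

for mass in [0.02, 0.2, 2.0, 20.0, 200.0, 2000.0]:  # roughly mouse ... elephant scale
    b = metabolic_rate(mass)
    # Mass-specific rate falls as M**(-1/4): larger animals burn less energy per kg.
    print(f"M = {mass:8.2f} kg  B = {b:8.2f}  B/M = {b / mass:6.3f}")
```

A 10,000-fold increase in body mass therefore gives only a 1,000-fold increase in total metabolic rate, which is the core empirical regularity the fractal-network model sets out to explain.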
{
"docid": "9d068f6b812272750fe8a56562d703a2",
"text": "Sustainable development, although a widely used phrase and idea, has many different meanings and therefore provokes many different responses. In broad terms, the concept of sustainable development is an attempt to combine growing concerns about a range of environmental issues with socio-economic issues. To aid understanding of these different policies this paper presents a classification and mapping of different trends of thought on sustainable development, their political and policy frameworks and their attitudes towards change and means of change. Sustainable development has the potential to address fundamental challenges for humanity, now and into the future. However, to do this, it needs more clarity of meaning, concentrating on sustainable livelihoods and well-being rather than well-having, and long term environmental sustainability, which requires a strong basis in principles that link the social and environmental to human equity. Copyright © 2005 John Wiley & Sons, Ltd and ERP Environment. Received 31 July 2002; revised 16 October 2003; accepted 3 December 2003 Sustainable Development: A Challenging and Contested Concept T HE WIDESPREAD RISE OF INTEREST IN, AND SUPPORT FOR, THE CONCEPT OF SUSTAINABLE development is potentially an important shift in understanding relationships of humanity with nature and between people. It is in contrast to the dominant outlook of the last couple of hundred years, especially in the ‘North’, that has been based on the view of the separation of the environment from socio-economic issues. For most of the last couple of hundred years the environment has been largely seen as external to humanity, mostly to be used and exploited, with a few special areas preserved as wilderness or parks. Environmental problems were viewed mainly as local. On the whole the relationship between people and the environment was conceived as humanity’s triumph over nature. This Promethean view (Dryzek, 1997) was that human knowledge and technology could overcome all obstacles including natural and environmental ones. This view was linked with the development of capitalism, the industrial revolution and modern science. As Bacon, one of the founders of modern science, put it, ‘The world is made for * Correspondence to: Bill Hopwood, Sustainable Cities Research Institute, 6 North Street East, University of Northumbria, Newcastle on Tyne NE1 8ST, UK. E-mail: william.hopwood@unn.ac.uk Sustainable Development Sust. Dev. 13, 38–52 (2005) Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/sd.244 Mapping Different Approaches 39 man, not man for the world’. Environmental management and concern amongst most businesses and governments, apart from local problems and wilderness conservation, was at best based on natural resource management. A key example was the ideas of Pinchot in the USA (Dryzek, 1997), which recognized that humans do need natural resources and that these resources should be managed, rather than rapidly exploited, in order to ensure maximum long-term use. Economics came to be the dominating issue of human relations with economic growth, defined by increasing production, as the main priority (Douthwaite, 1992). This was the seen as the key to humanity’s well-being and, through growth, poverty would be overcome: as everyone floated higher those at the bottom would be raised out of poverty. 
The concept of sustainable development is the result of the growing awareness of the global links between mounting environmental problems, socio-economic issues to do with poverty and inequality and concerns about a healthy future for humanity. It strongly links environmental and socio-economic issues. The first important use of the term was in 1980 in the World Conservation Strategy (IUCN et al., 1980). This process of bringing together environmental and socio-economic questions was most famously expressed in the Brundtland Report’s definition of sustainable development as meeting ‘the needs of the present without compromising the ability of future generations to meet their needs’ (WCED, 1987, p. 43). This defines needs from a human standpoint; as Lee (2000, p. 32) has argued, ‘sustainable development is an unashamedly anthropocentric concept’. Brundtland’s definition and the ideas expressed in the report Our Common Future recognize the dependency of humans on the environment to meet needs and well-being in a much wider sense than merely exploiting resources: ‘ecology and economy are becoming ever more interwoven – locally, regionally, nationally and globally’ (WCED, 1987, p. 5). Rather than domination over nature our lives, activities and society are nested within the environment (Giddings et al., 2002). The report stresses that humanity, whether in an industrialized or a rural subsistence society, depends for security and basic existence on the environment; the economy and our well-being now and in the future need the environment. It also points to the planetwide interconnections: environmental problems are not local but global, so that actions and impacts have to be considered internationally to avoid displacing problems from one area to another by actions such as releasing pollution that crosses boundaries, moving polluting industries to another location or using up more than an equitable share of the earth’s resources (by an ecological footprint (Wackernagel and Rees, 1996) far in excess of the area inhabited). Environmental problems threaten people’s health, livelihoods and lives and can cause wars and threaten future generations. Sustainable development raises questions about the post-war claim, that still dominates much mainstream economic policy, that international prosperity and human well-being can be achieved through increased global trade and industry (Reid, 1995; Moffat, 1996; Sachs, 1999). It recognizes that past growth models have failed to eradicate poverty globally or within countries, ‘no trends, . . . no programmes or policies offer any real hope of narrowing the growing gap between rich and poor nations’ (WCED, 1987, p. xi). This pattern of growth has also damaged the environment upon which we depend, with a ‘downward spiral of poverty and environmental degradation’ (WCED, 1987, p. xii). Brundtland, recognizing this failure, calls for a different form of growth, ‘changing the quality of growth, meeting essential needs, merging environment and economics in decision making’ (WCED, 1987, p. 49), with an emphasis on human development, participation in decisions and equity in benefits. The development proposed is a means to eradicate poverty, meet human needs and ensure that all get a fair share of resources – very different from present development. Social justice today and in the future is a crucial component of the concept of sustainable development. 
There were, and are, long standing debates about both goals and means within theories dealing with both environmental and socio-economic questions which have inevitably flowed into ideas on sustainable development. As Wackernagel and Rees (1996) have argued, the Brundtland Report attempted to bridge some of these debates by leaving a certain ambiguity, talking at the same time of the priorities of meeting the needs of the poor, protecting the environment and more rapid economic growth. The looseness of the concept and its theoretical underpinnings have enabled the use of the phrases ‘sustainable development’ and ‘sustainability’ to become de rigueur for politicians and business leaders, but as the Workshop on Urban Sustainability of the US National Science Foundation (2000, p. 1) pointed out, sustainability is ‘laden with so many definitions that it risks plunging into meaninglessness, at best, and becoming a catchphrase for demagogy, at worst. [It] is used to justify and legitimate a myriad of policies and practices ranging from communal agrarian utopianism to large-scale capital-intensive market development’. While many claim that sustainable development challenges the increased integration of the world in a capitalist economy dominated by multinationals (Middleton et al., 1993; Christie and Warburton, 2001), Brundtland’s ambiguity allows business and governments to be in favour of sustainability without any fundamental challenge to their present course, using Brundtland’s support for rapid growth to justify the phrase ‘sustainable growth’. Rees (1998) points out that this allows capitalism to continue to put forward economic growth as its ‘morally bankrupt solution’ to poverty. If the economy grows, eventually all will benefit (Dollar and Kraay, 2000): in modern parlance the trickle-down theory. Daly (1993) criticized the notion of ‘sustainable growth’ as ‘thought-stopping’ and oxymoronic in a world in which ecosystems are finite. At some point, economic growth with ever more use of resources and production of waste is unsustainable. Instead Daly argued for the term ‘sustainable development’ by which he, much more clearly than Brundtland, meant qualitative, rather than quantitative, improvements. Development is open to confusion, with some seeing it as an end in itself, so it has been suggested that greater clarity would be to speak of ‘sustainable livelihoods’, which is the aim that Brundtland outlined (Workshop on Urban Sustainability, 2000). Another area of debate is between the views of weak and strong sustainability (Haughton and Hunter, 1994). Weak sustainability sees natural and manufactured capital as interchangeable with technology able to fill human produced gaps in the natural world (Daly and Cobb, 1989) such as a lack of resources or damage to the environment. Solow put the case most strongly, stating that by substituting other factors for natural resources ‘the world can, in effect, get along without natural resources, so exhaustion is just an event, not a catastrophe’ (1974, p. 11). Strong ",
"title": ""
},
{
"docid": "ddf4e9582bc1b86ca8cb9967c4247e8e",
"text": "In the past few years, Iranian universities have embarked to use e-learning tools and technologies to extend and improve their educational services. After a few years of conducting e-learning programs a debate took place within the executives and managers of the e-learning institutes concerning which activities are of the most influence on the learning progress of online students. This research is aimed to investigate the impact of a number of e-learning activities on the students’ learning development. The results show that participation in virtual classroom sessions has the most substantial impact on the students’ final grades. This paper presents the process of applying data mining methods to the web usage records of students’ activities in a virtual learning environment. The main idea is to rank the learning activities based on their importance in order to improve students’ performance by focusing on the most important ones.",
"title": ""
},
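The ranking of learning activities described in the passage above can be approximated with a feature-importance analysis. The sketch below uses a random forest on a hypothetical per-student activity table; the column names, values, and choice of model are illustrative assumptions, not the study's actual data or method.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical activity counts aggregated from a virtual learning environment's usage logs.
df = pd.DataFrame({
    "virtual_class_sessions": [12, 3, 9, 15, 1, 7],
    "forum_posts":            [ 4, 0, 2,  6, 1, 3],
    "assignments_submitted":  [ 8, 2, 7,  9, 1, 5],
    "content_downloads":      [30, 5, 22, 41, 3, 18],
    "final_grade":            [17, 9, 15, 19, 7, 13],
})

X, y = df.drop(columns="final_grade"), df["final_grade"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Rank activities by their (impurity-based) importance for predicting the final grade.
ranking = sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:24s} {score:.3f}")
```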
{
"docid": "3a68175de0dbc4c89b66678976898d1f",
"text": "The rapid accumulation of data in social media (in million and billion scales) has imposed great challenges in information extraction, knowledge discovery, and data mining, and texts bearing sentiment and opinions are one of the major categories of user generated data in social media. Sentiment analysis is the main technology to quickly capture what people think from these text data, and is a research direction with immediate practical value in big data era. Learning such techniques will allow data miners to perform advanced mining tasks considering real sentiment and opinions expressed by users in additional to the statistics calculated from the physical actions (such as viewing or purchasing records) user perform, which facilitates the development of real-world applications. However, the situation that most tools are limited to the English language might stop academic or industrial people from doing research or products which cover a wider scope of data, retrieving information from people who speak different languages, or developing applications for worldwide users. More specifically, sentiment analysis determines the polarities and strength of the sentiment-bearing expressions, and it has been an important and attractive research area. In the past decade, resources and tools have been developed for sentiment analysis in order to provide subsequent vital applications, such as product reviews, reputation management, call center robots, automatic public survey, etc. However, most of these resources are for the English language. Being the key to the understanding of business and government issues, sentiment analysis resources and tools are required for other major languages, e.g., Chinese. In this tutorial, audience can learn the skills for retrieving sentiment from texts in another major language, Chinese, to overcome this obstacle. The goal of this tutorial is to introduce the proposed sentiment analysis technologies and datasets in the literature, and give the audience the opportunities to use resources and tools to process Chinese texts from the very basic preprocessing, i.e., word segmentation and part of speech tagging, to sentiment analysis, i.e., applying sentiment dictionaries and obtaining sentiment scores, through step-by-step instructions and a hand-on practice. The basic processing tools are from CKIP Participants can download these resources, use them and solve the problems they encounter in this tutorial. This tutorial will begin from some background knowledge of sentiment analysis, such as how sentiment are categorized, where to find available corpora and which models are commonly applied, especially for the Chinese language. Then a set of basic Chinese text processing tools for word segmentation, tagging and parsing will be introduced for the preparation of mining sentiment and opinions. After bringing the idea of how to pre-process the Chinese language to the audience, I will describe our work on compositional Chinese sentiment analysis from words to sentences, and an application on social media text (Facebook) as an example. All our involved and recently developed related resources, including Chinese Morphological Dataset, Augmented NTU Sentiment Dictionary (ANTUSD), E-hownet with sentiment information, Chinese Opinion Treebank, and the CopeOpi Sentiment Scorer, will also be introduced and distributed in this tutorial. The tutorial will end by a hands-on session of how to use these materials and tools to process Chinese sentiment.",
"title": ""
},
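The dictionary-lookup step at the heart of the pipeline described above can be sketched in a few lines. The lexicon entries, the negation rule, and the example sentences are toy stand-ins; real work would use ANTUSD-style scores and CKIP-style word segmentation rather than the hand-segmented lists shown here.

```python
# Toy sentiment lexicon (word -> polarity); the scores are invented for illustration.
lexicon = {"喜歡": 0.8, "討厭": -0.7, "精彩": 0.6, "無聊": -0.5}

def score_sentence(words):
    """Sum lexicon scores, flipping the polarity of the word after a simple negator."""
    total, negate = 0.0, False
    for w in words:
        if w == "不":           # minimal negation handling
            negate = True
            continue
        s = lexicon.get(w, 0.0)
        total += -s if negate else s
        negate = False
    return total

# Assumes the sentences are already word-segmented (e.g. by a CKIP-style segmenter).
print(score_sentence(["我", "很", "喜歡", "這部", "電影"]))   # > 0: positive
print(score_sentence(["我", "不", "喜歡", "這部", "電影"]))   # < 0: negated
```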
{
"docid": "2eff0a817a48a2fd62e6f834d0389105",
"text": "In this paper, we demonstrate that image reconstruction can be expressed in terms of neural networks. We show that filtered backprojection can be mapped identically onto a deep neural network architecture. As for the case of iterative reconstruction, the straight forward realization as matrix multiplication is not feasible. Thus, we propose to compute the back-projection layer efficiently as fixed function and its gradient as projection operation. This allows a data-driven approach for joint optimization of correction steps in projection domain and image domain. As a proof of concept, we demonstrate that we are able to learn weightings and additional filter layers that consistently reduce the reconstruction error of a limited angle reconstruction by a factor of two while keeping the same computational complexity as filtered back-projection. We believe that this kind of learning approach can be extended to any common CT artifact compensation heuristic and will outperform hand-crafted artifact correction methods in the future.",
"title": ""
},
{
"docid": "4ea0ee8c40e2cc8ac5238eb6a3579414",
"text": "This paper suggests a method for Subject–Action–Object (SAO) network analysis of patents for technology trends identification by using the concept of function. The proposed method solves the shortcoming of the keyword-based approach to identification of technology trends, i.e., that it cannot represent how technologies are used or for what purpose. The concept of function provides information on how a technology is used and how it interacts with other technologies; the keyword-based approach does not provide such information. The proposed method uses an SAO model and represents “key concept” instead of “key word”. We present a procedure that formulates an SAO network by using SAO models extracted from patent documents, and a method that applies actor network theory to analyze technology implications of the SAO network. To demonstrate the effectiveness of the SAO network this paper presents a case study of patents related to Polymer Electrolyte Membrane technology in Proton Exchange Membrane Fuel Cells.",
"title": ""
},
{
"docid": "dac17254c16068a4dcf49e114bfcc822",
"text": "We present a novel coded exposure video technique for multi-image motion deblurring. The key idea of this paper is to capture video frames with a set of complementary fluttering patterns, which enables us to preserve all spectrum bands of a latent image and recover a sharp latent image. To achieve this, we introduce an algorithm for generating a complementary set of binary sequences based on the modern communication theory and implement the coded exposure video system with an off-the-shelf machine vision camera. To demonstrate the effectiveness of our method, we provide in-depth analyses of the theoretical bounds and the spectral gains of our method and other state-of-the-art computational imaging approaches. We further show deblurring results on various challenging examples with quantitative and qualitative comparisons to other computational image capturing methods used for image deblurring, and show how our method can be applied for protecting privacy in videos.",
"title": ""
},
{
"docid": "5f1c6bb714a9daeeec807117284e92f0",
"text": "One potential method to estimate noninvasive cuffless blood pressure (BP) is pulse wave velocity (PWV), which can be calculated by using the distance and the transit time of the blood between two arterial sites. To obtain the pulse waveform, bioimpedance (BI) measurement is a promising approach because it continuously reflects the change in BP through the change in the arterial cross-sectional area. Many studies have investigated BI channels in a vertical direction with electrodes located along the wrist and the finger to calculate PWV and convert to BP; however, the measurement systems were relatively large in size. In order to reduce the total device size for use in a PWV-based BP smartwatch, this study proposed and examined a robust horizontal BI structure. The BI device was also designed to apply in a very small body area. The proposed structure was based on two sets of four electrodes attached around the wrist. Our model was evaluated on 15 human subjects; the PWV values were obtained with various distances between two BI channels to assess the efficacy. The results showed that the designed BI system can monitor pulse rate efficiently in only a 0.5 × 1.75 cm² area of the body. The correlation of pulse rate from the proposed design against the reference was 0.98 ± 0.07 (p < 0.001). Our structure yielded higher detection ratios for PWV measurements of 99.0 ± 2.2%, 99.0 ± 2.1%, and 94.8 ± 3.7% at 1, 2, and 3 cm between two BI channels, respectively. The measured PWVs correlated well with the BP standard device at 0.81 ± 0.08 and 0.84 ± 0.07 with low root-mean-squared-errors at 7.47 ± 2.15 mmHg and 5.17 ± 1.81 mmHg for SBP and DBP, respectively. The result demonstrates the potential of a new wearable BP smartwatch structure.",
"title": ""
},
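The arithmetic behind PWV-based cuffless blood pressure estimation in the passage above is simple: PWV is the separation between the two bioimpedance channels divided by the pulse transit time, and BP is then mapped from PWV with a subject-specific calibration. The calibration coefficients below are placeholders for illustration, not values reported in the paper.

```python
def pulse_wave_velocity(distance_m, transit_time_s):
    """PWV = distance between the two BI channels / pulse transit time."""
    return distance_m / transit_time_s

def estimate_bp(pwv, a, b):
    """Linear subject-specific calibration BP ~ a * PWV + b (coefficients are hypothetical)."""
    return a * pwv + b

# Example: channels 2 cm apart and a 4 ms transit time give PWV = 5 m/s.
pwv = pulse_wave_velocity(0.02, 0.004)
sbp = estimate_bp(pwv, a=14.0, b=50.0)   # made-up calibration values
dbp = estimate_bp(pwv, a=9.0, b=35.0)
print(f"PWV = {pwv:.2f} m/s, SBP ~ {sbp:.0f} mmHg, DBP ~ {dbp:.0f} mmHg")
```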
{
"docid": "8e0ac2ad99b819f0c1c36cfa4f20b0ef",
"text": "As a new distributed computing model, crowdsourcing lets people leverage the crowd's intelligence and wisdom toward solving problems. This article proposes a framework for characterizing various dimensions of quality control in crowdsourcing systems, a critical issue. The authors briefly review existing quality-control approaches, identify open issues, and look to future research directions. In the Web extra, the authors discuss both design-time and runtime approaches in more detail.",
"title": ""
},
{
"docid": "a0172830d69b0a386aa291235e5837a0",
"text": "There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms – such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) – requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that are competitive with state-ofthe-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. We also demonstrate TVM’s ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.",
"title": ""
},
{
"docid": "9f40a57159a06ecd9d658b4d07a326b5",
"text": "_____________________________________________________________________________ The aim of the present study was to investigate a cytotoxic oxidative cell stress related and the antioxidant profile of kaempferol, quercetin, and isoquercitrin. The flavonol compounds were able to act as scavengers of superoxide anion (but not hydrogen peroxide), hypochlorous acid, chloramine and nitric oxide. Although flavonoids are widely described as antioxidants and this activity is generally related to beneficial effects on human health, here we show important cytotoxic actions of three well known flavonoids. They were able to promote hemolysis which one was exacerbated on the presence of hypochlorous acid but not by AAPH radical. Therefore, WWW.SCIELO.BR/EQ VOLUME 36, NÚMERO 2, 2011",
"title": ""
},
{
"docid": "ac1edbb7cef99be7127cb505faf7a082",
"text": "http://dujs.dartmouth.edu/2011/02/you-are-what-you-eat-how-food-affects-your-mood/ or thousands of years, people have believed that food could influence their health and wellbeing. Hippocrates, the father of modern medicine, once said: “Let your food be your medicine, and your medicine be your food” (1). In medieval times, people started to take great interest in how certain foods affected their mood and temperament. Many medical culinary textbooks of the time described the relationship between food and mood. For example, quince, dates and elderberries were used as mood enhancers, lettuce and chicory as tranquilizers, and apples, pomegranates, beef and eggs as erotic stimulants (1). The past 80 years have seen immense progress in research, primarily short-term human trials and animal studies, showing how certain foods change brain structure, chemistry, and physiology thus affecting mood and performance. These studies suggest that foods directly influencing brain neurotransmitter systems have the greatest effects on mood, at least temporarily. In turn, mood can also influence our food choices and expectations on the effects of certain foods can influence our perception.",
"title": ""
}
] |
scidocsrr
|
676c2fb0b8eea08a77d812ffa3ef15b9
|
Impact of Data Normalization on Stock Index Forecasting
|
[
{
"docid": "5fb09fd2436069e01ad2d9292769069c",
"text": "In this study, we propose a novel nonlinear ensemble forecasting model integrating generalized linear autoregression (GLAR) with artificial neural networks (ANN) in order to obtain accurate prediction results and ameliorate forecasting performances. We compare the new model’s performance with the two individual forecasting models—GLAR and ANN—as well as with the hybrid model and the linear combination models. Empirical results obtained reveal that the prediction using the nonlinear ensemble model is generally better than those obtained using the other models presented in this study in terms of the same evaluation measurements. Our findings reveal that the nonlinear ensemble model proposed here can be used as an alternative forecasting tool for exchange rates to achieve greater forecasting accuracy and improve prediction quality further. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
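The nonlinear ensemble idea in the passage above (feeding the outputs of the individual forecasters into a small neural network instead of averaging them) can be sketched as follows. The GLAR component is replaced here by an ordinary linear autoregression and the series is synthetic, so this only illustrates the combination scheme, not the paper's exact models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = np.sin(np.arange(300) / 8.0) + 0.1 * rng.standard_normal(300)  # synthetic series

# Lagged design matrix: predict y[t] from y[t-1], y[t-2], y[t-3].
X = np.column_stack([y[2:-1], y[1:-2], y[:-3]])
t = y[3:]
X_tr, X_te, t_tr, t_te = X[:250], X[250:], t[:250], t[250:]

m1 = LinearRegression().fit(X_tr, t_tr)   # linear stand-in for the GLAR forecaster
m2 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X_tr, t_tr)  # ANN

# Nonlinear ensemble: a second-stage network learns how to combine the two forecasts.
Z_tr = np.column_stack([m1.predict(X_tr), m2.predict(X_tr)])
Z_te = np.column_stack([m1.predict(X_te), m2.predict(X_te)])
combiner = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(Z_tr, t_tr)

print("ensemble test MSE:", np.mean((combiner.predict(Z_te) - t_te) ** 2))
```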
{
"docid": "5546f93f4c10681edb0fdfe3bf52809c",
"text": "The current applications of neural networks to in vivo medical imaging and signal processing are reviewed. As is evident from the literature neural networks have already been used for a wide variety of tasks within medicine. As this trend is expected to continue this review contains a description of recent studies to provide an appreciation of the problems associated with implementing neural networks for medical imaging and signal processing.",
"title": ""
}
] |
[
{
"docid": "d4724f6b007c914120508b2e694a31d9",
"text": "Finding semantically related words is a first step in the dire ct on of automatic ontology building. Guided by the view that similar words occur in simi lar contexts, we looked at the syntactic context of words to measure their semantic sim ilarity. Words that occur in a direct object relation with the verb drink, for instance, have something in common ( liquidity, ...). Co-occurrence data for common nouns and proper names , for several syntactic relations, was collected from an automatically parsed corp us of 78 million words of newspaper text. We used several vector-based methods to compute the distributional similarity between words. Using Dutch EuroWordNet as evaluation stand ard, we investigated which vector-based method and which combination of syntactic rel ations is the strongest predictor of semantic similarity.",
"title": ""
},
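A minimal sketch of the vector-based similarity measure in the passage above: each noun is represented by counts of the (relation, governor) contexts it occurs in, and similarity is the cosine of the two count vectors. The tiny co-occurrence counts below are invented for illustration.

```python
import math
from collections import Counter

# Invented (relation, head) context counts for three nouns.
contexts = {
    "beer": Counter({("obj", "drink"): 12, ("obj", "brew"): 5, ("mod", "cold"): 4}),
    "wine": Counter({("obj", "drink"): 10, ("obj", "pour"): 3, ("mod", "red"): 6}),
    "car":  Counter({("obj", "drive"): 14, ("obj", "park"): 7, ("mod", "fast"): 5}),
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(contexts["beer"], contexts["wine"]))  # high: shared "object of drink" context
print(cosine(contexts["beer"], contexts["car"]))   # zero: no shared contexts
```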
{
"docid": "c96fa07ef9860880d391a750826f5faf",
"text": "This paper presents the investigations of short-circuit current, electromagnetic force, and transient dynamic response of windings deformation including mechanical stress, strain, and displacements for an oil-immersed-type 220-kV power transformer. The worst-case fault with three-phase short-circuit happening simultaneously is assumed. A considerable leakage magnetic field excited by short-circuit current can produce the dynamical electromagnetic force to act on copper disks in each winding. The two-dimensional finite element method (FEM) is employed to obtain the electromagnetic force and its dynamical characteristics in axial and radial directions. In addition, to calculate the windings deformation accurately, we measured the nonlinear elasticity characteristic of spacer and built three-dimensional FE kinetic model to analyze the axial dynamic deformation. The results of dynamic mechanical stress and strain induced by combining of short-circuit force and prestress are useful for transformer design and fault diagnosis.",
"title": ""
},
{
"docid": "738a69ad1006c94a257a25c1210f6542",
"text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.",
"title": ""
},
{
"docid": "d0e2f8c9c7243f5a67e73faeb78038d1",
"text": "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently. Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.",
"title": ""
},
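The AUC-based convergence test mentioned in the passage above amounts to scoring a validation set with the partially built cascade stage and stopping once the area under the ROC curve stops improving. A sketch, with placeholder scores, labels, and stopping threshold:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def stage_converged(scores, labels, prev_auc, min_gain=1e-3):
    """Stop adding weak classifiers to this stage once the AUC gain falls below min_gain."""
    auc = roc_auc_score(labels, scores)
    return auc, (auc - prev_auc) < min_gain

# Placeholder validation scores after adding one more weak (logistic regression) classifier.
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.6, 0.4, 0.3, 0.2, 0.7, 0.5])
auc, done = stage_converged(scores, labels, prev_auc=0.95)
print(f"stage AUC = {auc:.3f}, converged = {done}")
```

Because the stopping rule looks only at AUC, the false-positive rate of each stage is free to adapt, which is the property the passage credits for the faster convergence of SURF cascade training.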
{
"docid": "d12d475dc72f695d3aecfb016229da19",
"text": "Following the increasing popularity of the mobile ecosystem, cybercriminals have increasingly targeted mobile ecosystems, designing and distributing malicious apps that steal information or cause harm to the device's owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach.To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls' sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and usergenerated inputs. We find that combining both static and dynamic analysis yields the best performance, with $F -$measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and investigate the reasons for inconsistent misclassifications across methods.",
"title": ""
},
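The behavioural modelling referred to in the passage above boils down to abstracting each API call (for example to its package) and estimating a Markov transition matrix over consecutive calls; the flattened transition probabilities then serve as the app's feature vector. A simplified sketch with an invented call sequence:

```python
from collections import defaultdict

# One app's abstracted API call sequence (package-level abstraction, invented for illustration).
calls = ["android.telephony", "java.net", "java.io", "java.net", "android.telephony", "java.net"]

# Count transitions between consecutive abstracted calls.
counts = defaultdict(lambda: defaultdict(int))
for src, dst in zip(calls, calls[1:]):
    counts[src][dst] += 1

# Normalise rows into a Markov transition matrix and flatten it as the feature vector.
states = sorted(set(calls))
features = []
for src in states:
    total = sum(counts[src].values()) or 1
    features.extend(counts[src][dst] / total for dst in states)

print(states)
print(features)  # in the full pipeline this vector is fed to a classifier such as a random forest
```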
{
"docid": "3f6fcee0073e7aaf587602d6510ed913",
"text": "BACKGROUND\nTreatment of early onset scoliosis (EOS) is challenging. In many cases, bracing will not be effective and growing rod surgery may be inappropriate. Serial, Risser casts may be an effective intermediate method of treatment.\n\n\nMETHODS\nWe studied 20 consecutive patients with EOS who received serial Risser casts under general anesthesia between 1999 and 2011. Analyses included diagnosis, sex, age at initial cast application, major curve severity, initial curve correction, curve magnitude at the time of treatment change or latest follow-up for those still in casts, number of casts per patient, the type of subsequent treatment, and any complications.\n\n\nRESULTS\nThere were 8 patients with idiopathic scoliosis, 6 patients with neuromuscular scoliosis, 5 patients with syndromic scoliosis, and 1 patient with skeletal dysplasia. Fifteen patients were female and 5 were male. The mean age at first cast was 3.8±2.3 years (range, 1 to 8 y), and the mean major curve magnitude was 74±18 degrees (range, 40 to 118 degrees). After initial cast application, the major curve measured 46±14 degrees (range, 25 to 79 degrees). At treatment change or latest follow-up for those still in casts, the major curve measured 53±24 degrees (range, 13 to 112 degrees). The mean time in casts was 16.9±9.1 months (range, 4 to 35 mo). The mean number of casts per patient was 4.7±2.2 casts (range, 1 to 9 casts). At the time of this study, 7 patients had undergone growing rod surgery, 6 patients were still undergoing casting, 5 returned to bracing, and 2 have been lost to follow-up. Four patients had minor complications: 2 patients each with superficial skin irritation and cast intolerance.\n\n\nCONCLUSIONS\nSerial Risser casting is a safe and effective intermediate treatment for EOS. It can stabilize relatively large curves in young children and allows the child to reach a more suitable age for other forms of treatment, such as growing rods.\n\n\nLEVEL OF EVIDENCE\nLevel IV; case series.",
"title": ""
},
{
"docid": "e1fd762bc710863f2df3fd6c41cf468b",
"text": "This paper analyzes the performances of Spearman’s rho (SR) and Kendall’s tau (KT) with respect to samples drawn from bivariate normal and contaminated normal populations. Theoretical and simulation results suggest that, contrary to the opinion of equivalence between SR and KT in some literature, the behaviors of SR and KT are strikingly different in the aspects of bias effect, variance, mean square error (MSE), and asymptotic relative efficiency (ARE). The new findings revealed in this work provide not only deeper insights into the two most widely used rank-based correlation coefficients, but also a guidance for choosing which one to use under the circumstances where Pearson’s product moment correlation coefficient (PPMCC) fails to apply. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
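The comparison in the passage above is easy to reproduce empirically: draw samples from a bivariate normal (and a contaminated version of it) and compute both coefficients with SciPy. The sample size, correlation, and contamination fraction below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

rng = np.random.default_rng(1)
n, rho = 500, 0.6
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Contaminated normal: a small fraction of points replaced by high-variance noise.
mask = rng.random(n) < 0.05
xy_cont = xy.copy()
xy_cont[mask] = rng.multivariate_normal([0.0, 0.0], [[25.0, 0.0], [0.0, 25.0]], size=int(mask.sum()))

for name, data in [("clean", xy), ("contaminated", xy_cont)]:
    sr, _ = spearmanr(data[:, 0], data[:, 1])
    kt, _ = kendalltau(data[:, 0], data[:, 1])
    print(f"{name:12s} SR = {sr:.3f}  KT = {kt:.3f}")
```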
{
"docid": "13e2b22875e1a23e9e8ea2f80671c74e",
"text": "This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.",
"title": ""
},
{
"docid": "cf17aefc8e4cb91c6fdb7c621651d41e",
"text": "Quantitative 13C NMR spectroscopy has been used to study the chemical structure of industrial kraft lignin, obtained from softwood pulping, and its nitrosated derivatives, which demonstrate high inhibition activity in the polymerization of unsaturated hydrocarbons.",
"title": ""
},
{
"docid": "e34a61754ff8cfac053af5cbedadd9e0",
"text": "An ongoing, annual survey of publications in systems and software engineering identifies the top 15 scholars and institutions in the field over a 5-year period. Each ranking is based on the weighted scores of the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software of the corresponding period. This report summarizes the results for 2003–2007 and 2004–2008. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea for 2003–2007, and Simula Research Laboratory, Norway for 2004–2008, while Magne Jørgensen is the top-ranked scholar for both periods.",
"title": ""
},
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
},
{
"docid": "e5638848a3844d7edf7dae7115233771",
"text": "Interest in gamification is growing steadily. But as the underlying mechanisms of gamification are not well understood yet, a closer examination of a gamified activity's meaning and individual game design elements may provide more insights. We examine the effects of points -- a basic element of gamification, -- and meaningful framing -- acknowledging participants' contribution to a scientific cause, -- on intrinsic motivation and performance in an online image annotation task. Based on these findings, we discuss implications and opportunities for future research on gamification.",
"title": ""
},
{
"docid": "2c329f3d77abe2d73bbddee34268c12f",
"text": "Various procedures of mixing starting powders for hot-pressing α-SiAlON ceramics were studied. They included different milling methods (attrition milling, ball milling, and sonication), liquid medium (water, isopropyl alcohol, and pyridine), and atmospheres (ambient air and nitrogen). These mixing procedures resulted in markedly different densification behavior and fired ceramics. As the powders experienced increasing oxidation because of mixing, the densification temperature decreased, the amount of residual glass increased, and α-SiAlON was destabilized and replaced by ß-SiAlON and AlN polytypes during hot pressing. These effects were mitigated when pyridine, nitrogen, and sonication were used. Several protocols that yielded nearly phase-pure, glass-free dense α-SiAlON were thus identified. Comments Copyright The American Ceramic Society. Reprinted from Journal of the American Ceramic Society, Volume 89, Issue 3, 2006, pages 1110-1113. This journal article is available at ScholarlyCommons: http://repository.upenn.edu/mse_papers/103 The Effect of Powder Mixing Procedures on a-SiAlON Roman Shuba and I-Wei Chen Department of Materials Science and Engineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6272 Various procedures of mixing starting powders for hot-pressing a-SiAlON ceramics were studied. They included different milling methods (attrition milling, ball milling, and sonication), liquid medium (water, isopropyl alcohol, and pyridine), and atmospheres (ambient air and nitrogen). These mixing procedures resulted in markedly different densification behavior and fired ceramics. As the powders experienced increasing oxidation because of mixing, the densification temperature decreased, the amount of residual glass increased, and a-SiAlON was destabilized and replaced by b-SiAlON and AlN polytypes during hot pressing. These effects were mitigated when pyridine, nitrogen, and sonication were used. Several protocols that yielded nearly phase-pure, glass-free dense a-SiAlON were thus identified.",
"title": ""
},
{
"docid": "a00cc13a716439c75a5b785407b02812",
"text": "A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs.",
"title": ""
},
{
"docid": "4b1948d0b09047baf27b95f5b416c8e7",
"text": "Recently, several pattern recognition methods have been proposed to automatically discriminate between patients with and without Alzheimer's disease using different imaging modalities: sMRI, fMRI, PET and SPECT. Classical approaches in visual information retrieval have been successfully used for analysis of structural MRI brain images. In this paper, we use the visual indexing framework and pattern recognition analysis based on structural MRI data to discriminate three classes of subjects: normal controls (NC), mild cognitive impairment (MCI) and Alzheimer's disease (AD). The approach uses the circular harmonic functions (CHFs) to extract local features from the most involved areas in the disease: hippocampus and posterior cingulate cortex (PCC) in each slice in all three brain projections. The features are quantized using the Bag-of-Visual-Words approach to build one signature by brain (subject). This yields a transformation of a full 3D image of brain ROIs into a 1D signature, a histogram of quantized features. To reduce the dimensionality of the signature, we use the PCA technique. Support vector machines classifiers are then applied to classify groups. The experiments were conducted on a subset of ADNI dataset and applied to the \"Bordeaux-3City\" dataset. The results showed that our approach achieves respectively for ADNI dataset and \"Bordeaux-3City\" dataset; for AD vs NC classification, an accuracy of 83.77% and 78%, a specificity of 88.2% and 80.4% and a sensitivity of 79.09% and 74.7%. For NC vs MCI classification we achieved for the ADNI datasets an accuracy of 69.45%, a specificity of 74.8% and a sensitivity of 62.52%. For the most challenging classification task (AD vs MCI), we reached an accuracy of 62.07%, a specificity of 75.15% and a sensitivity of 49.02%. The use of PCC visual features description improves classification results by more than 5% compared to the use of hippocampus features only. Our approach is automatic, less time-consuming and does not require the intervention of the clinician during the disease diagnosis.",
"title": ""
},
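A condensed sketch of the classification pipeline described in the passage above: local descriptors are quantised into a visual vocabulary, each subject gets one Bag-of-Visual-Words histogram (the "signature"), PCA reduces its dimensionality, and an SVM does the classification. Random vectors stand in for the CHF descriptors extracted from hippocampus and PCC slices, and all sizes are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
k = 32  # visual vocabulary size (placeholder)

# Placeholder local descriptors: one (n_patches, descriptor_dim) array per subject.
subjects = [rng.standard_normal((200, 16)) + (i % 2) * 0.3 for i in range(40)]
labels = np.array([i % 2 for i in range(40)])  # e.g. 0 = NC, 1 = AD

vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(subjects))

def signature(descriptors):
    """Bag-of-Visual-Words histogram: fraction of patches assigned to each visual word."""
    words = vocab.predict(descriptors)
    return np.bincount(words, minlength=k) / len(words)

X = np.array([signature(s) for s in subjects])
X = PCA(n_components=10).fit_transform(X)
clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```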
{
"docid": "5e6f9014a07e7b2bdfd255410a73b25f",
"text": "Context: Offshore software development outsourcing is a modern business strategy for developing high quality software at low cost. Objective: The objective of this research paper is to identify and analyse factors that are important in terms of the competitiveness of vendor organisations in attracting outsourcing projects. Method: We performed a systematic literature review (SLR) by applying our customised search strings which were derived from our research questions. We performed all the SLR steps, such as the protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis. Results: We have identified factors such as cost-saving, skilled human resource, appropriate infrastructure, quality of product and services, efficient outsourcing relationships management, and an organisation’s track record of successful projects which are generally considered important by the outsourcing clients. Our results indicate that appropriate infrastructure, cost-saving, and skilled human resource are common in three continents, namely Asia, North America and Europe. We identified appropriate infrastructure, cost-saving, and quality of products and services as being common in three types of organisations (small, medium and large). We have also identified four factors-appropriate infrastructure, cost-saving, quality of products and services, and skilled human resource as being common in the two decades (1990–1999 and 2000–mid 2008). Conclusions: Cost-saving should not be considered as the driving factor in the selection process of software development outsourcing vendors. Vendors should rather address other factors in order to compete in as sk the OSDO business, such and services.",
"title": ""
},
{
"docid": "c576c08aa746ea30a528e104932047a6",
"text": "Despite tremendous progress achieved in temporal action localization, state-of-the-art methods still struggle to train accurate models when annotated data is scarce. In this paper, we introduce a novel active learning framework for temporal localization that aims to mitigate this data dependency issue. We equip our framework with active selection functions that can reuse knowledge from previously annotated datasets. We study the performance of two state-of-the-art active selection functions as well as two widely used active learning baselines. To validate the effectiveness of each one of these selection functions, we conduct simulated experiments on ActivityNet. We find that using previously acquired knowledge as a bootstrapping source is crucial for active learners aiming to localize actions. When equipped with the right selection function, our proposed framework exhibits significantly better performance than standard active learning strategies, such as uncertainty sampling. Finally, we employ our framework to augment the newly compiled Kinetics action dataset with ground-truth temporal annotations. As a result, we collect Kinetics-Localization, a novel large-scale dataset for temporal action localization, which contains more than 15K YouTube videos.",
"title": ""
},
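For reference, the simplest of the baseline selection functions mentioned in the passage above, uncertainty sampling, just ranks unlabeled videos by the entropy of the current model's prediction and asks annotators to label the most uncertain ones. A generic sketch (the posteriors below are toy values):

```python
import numpy as np

def uncertainty_sampling(probs, budget):
    """Pick the `budget` unlabeled items whose predicted class distribution has highest entropy.

    probs: (n_items, n_classes) array of model posteriors for the unlabeled pool.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:budget]

# Toy posteriors over 3 action classes for 4 unlabeled videos.
probs = np.array([
    [0.98, 0.01, 0.01],   # confident prediction -> low priority
    [0.40, 0.35, 0.25],   # uncertain -> high priority
    [0.70, 0.20, 0.10],
    [0.34, 0.33, 0.33],   # most uncertain
])
print(uncertainty_sampling(probs, budget=2))
```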
{
"docid": "0bf292fdbc04805b4bd671d6f5099cf7",
"text": "We consider the stochastic optimization of finite sums over a Riemannian manifold where the functions are smooth and convex. We present MASAGA, an extension of the stochastic average gradient variant SAGA on Riemannian manifolds. SAGA is a variance-reduction technique that typically outperforms methods that rely on expensive full-gradient calculations, such as the stochastic variance-reduced gradient method. We show that MASAGA achieves a linear convergence rate with uniform sampling, and we further show that MASAGA achieves a faster convergence rate with non-uniform sampling. Our experiments show that MASAGA is faster than the recent Riemannian stochastic gradient descent algorithm for the classic problem of finding the leading eigenvector corresponding to the maximum eigenvalue.",
"title": ""
},
{
"docid": "67417a87eff4ad3b1d2a906a1f17abd2",
"text": "Epitaxial growth of A-A and A-B stacking MoS2 on WS2 via a two-step chemical vapor deposition method is reported. These epitaxial heterostructures show an atomic clean interface and a strong interlayer coupling, as evidenced by systematic characterization. Low-frequency Raman breathing and shear modes are observed in commensurate stacking bilayers for the first time; these can serve as persuasive fingerprints for interfacial quality and stacking configurations.",
"title": ""
},
{
"docid": "67f46f2866852372a78c7745d9e29a63",
"text": "The endosomal sorting complexes required for transport (ESCRTs) catalyse one of the most unusual membrane remodelling events in cell biology. ESCRT-I and ESCRT-II direct membrane budding away from the cytosol by stabilizing bud necks without coating the buds and without being consumed in the buds. ESCRT-III cleaves the bud necks from their cytosolic faces. ESCRT-III-mediated membrane neck cleavage is crucial for many processes, including the biogenesis of multivesicular bodies, viral budding, cytokinesis and, probably, autophagy. Recent studies of ultrastructures induced by ESCRT-III overexpression in cells and the in vitro reconstitution of the budding and scission reactions have led to breakthroughs in understanding these remarkable membrane reactions.",
"title": ""
}
] |
scidocsrr
|
66d70e9d7fece9d5c642f654dfc1c3a7
|
Characterization and management of exfoliative cheilitis: a single-center experience.
|
[
{
"docid": "3e2f4a96462ed5a12fbe0462272d013c",
"text": "Exfoliative cheilitis is an uncommon condition affecting the vermilion zone of the upper, lower or both lips. It is characterized by the continuous production and desquamation of unsightly, thick scales of keratin; when removed, these leave a normal appearing lip beneath. The etiology is unknown, although some cases may be factitious. Attempts at treatment by a wide variety of agents and techniques have been unsuccessful. Three patients with this disease are reported and its relationship to factitious cheilitis and candidal cheilitis is discussed.",
"title": ""
}
] |
[
{
"docid": "b7dd9d1cb89ec4aab21b9bb35cec1beb",
"text": "Target detection is one of the important applications in the field of remote sensing. The Gaofen-3 (GF-3) Synthetic Aperture Radar (SAR) satellite launched by China is a powerful tool for maritime monitoring. This work aims at detecting ships in GF-3 SAR images using a new land masking strategy, the appropriate model for sea clutter and a neural network as the discrimination scheme. Firstly, the fully convolutional network (FCN) is applied to separate the sea from the land. Then, by analyzing the sea clutter distribution in GF-3 SAR images, we choose the probability distribution model of Constant False Alarm Rate (CFAR) detector from K-distribution, Gamma distribution and Rayleigh distribution based on a tradeoff between the sea clutter modeling accuracy and the computational complexity. Furthermore, in order to better implement CFAR detection, we also use truncated statistic (TS) as a preprocessing scheme and iterative censoring scheme (ICS) for boosting the performance of detector. Finally, we employ a neural network to re-examine the results as the discrimination stage. Experiment results on three GF-3 SAR images verify the effectiveness and efficiency of this approach.",
"title": ""
},
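The CFAR stage described in the passage above can be summarised as: for each pixel, estimate the clutter level from a ring of surrounding training cells (excluding a guard band) and declare a target if the pixel exceeds that estimate by a threshold set from the desired false-alarm rate. The cell-averaging sketch below reduces the Gamma/K-distributed threshold computation to a single scalar factor and uses synthetic clutter, so it only illustrates the mechanics, not the paper's exact detector.

```python
import numpy as np

def ca_cfar(img, guard=2, train=6, alpha=5.0):
    """Cell-averaging CFAR: flag pixels exceeding alpha * local clutter mean."""
    h, w = img.shape
    det = np.zeros_like(img, dtype=bool)
    r = guard + train
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = img[i - r:i + r + 1, j - r:j + r + 1].copy()
            # Mask out the guard band and the cell under test before averaging.
            window[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = np.nan
            det[i, j] = img[i, j] > alpha * np.nanmean(window)
    return det

rng = np.random.default_rng(0)
sea = rng.gamma(shape=2.0, scale=1.0, size=(64, 64))  # rough stand-in for sea clutter
sea[32, 32] += 60.0                                   # one bright point target (a "ship")
print(ca_cfar(sea).sum(), "detections")
```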
{
"docid": "ca722c65f7089f6fad369ce0f3d42abd",
"text": "A huge amount of texts available on the World Wide Web presents an unprecedented opportunity for information extraction (IE). One important assumption in IE is that frequent extractions are more likely to be correct. Sparse IE is hence a challenging task because no matter how big a corpus is, there are extractions supported by only a small amount of evidence in the corpus. However, there is limited research on sparse IE, especially in the assessment of the validity of sparse IEs. Motivated by this, we introduce a lightweight, explicit semantic approach for assessing sparse IE.1 We first use a large semantic network consisting of millions of concepts, entities, and attributes to explicitly model the context of any semantic relationship. Second, we learn from three semantic contexts using different base classifiers to select an optimal classification model for assessing sparse extractions. Finally, experiments show that as compared with several state-of-the-art approaches, our approach can significantly improve the F-score in the assessment of sparse extractions while maintaining the efficiency.",
"title": ""
},
{
"docid": "92cafadc922255249108ce4a0dad9b98",
"text": "Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5% over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2× faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices.",
"title": ""
},
{
"docid": "0801a2fd26263388a678d57bf7d2ff88",
"text": "In the past, conventional i-vectors based on a Universal Background Model (UBM) have been successfully used as input features to adapt a Deep Neural Network (DNN) Acoustic Model (AM) for Automatic Speech Recognition (ASR). In contrast, this paper introduces Hidden Markov Model (HMM) based ivectors that use HMM state alignment information from an ASR system for estimating i-vectors. Further, we propose passing these HMM based i-vectors though an explicit non-linear hidden layer of a DNN before combining them with standard acoustic features, such as log filter bank energies (LFBEs). To improve robustness to mismatched adaptation data, we also propose estimating i-vectors in a causal fashion for training the DNN, restricting the connectivity among hidden nodes in the DNN and applying a max-pool non-linearity at selected hidden nodes. In our experiments, these techniques yield about 5-7% relative word error rate (WER) improvement over the baseline speaker independent system in matched condition, and a substantial WER reduction for mismatched adaptation data.",
"title": ""
},
{
"docid": "30596d0edee0553117c5109eb948e1b6",
"text": "Spatial relationships between objects provide important information for text-based image retrieval. As users are more likely to describe a scene from a real world perspective, using 3D spatial relationships rather than 2D relationships that assume a particular viewing direction, one of the main challenges is to infer the 3D structure that bridges images with users text descriptions. However, direct inference of 3D structure from images requires learning from large scale annotated data. Since interactions between objects can be reduced to a limited set of atomic spatial relations in 3D, we study the possibility of inferring 3D structure from a text description rather than an image, applying physical relation models to synthesize holistic 3D abstract object layouts satisfying the spatial constraints present in a textual description. We present a generic framework for retrieving images from a textual description of a scene by matching images with these generated abstract object layouts. Images are ranked by matching object detection outputs (bounding boxes) to 2D layout candidates (also represented by bounding boxes) which are obtained by projecting the 3D scenes with sampled camera directions. We validate our approach using public indoor scene datasets and show that our method outperforms baselines built upon object occurrence histograms and learned 2D pairwise relations.",
"title": ""
},
{
"docid": "35293c16985878fca24b5a327fd52c72",
"text": "In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method – which we dub categorical generative adversarial networks (or CatGAN) – on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).",
"title": ""
},
{
"docid": "f11bfcebaa54f78c26ce7534e30c3fb8",
"text": "This article describes OpenTracker, an open software architecture that provides a framework for the different tasks involved in tracking input devices and processing multi-modal input data in virtual environments and augmented reality application. The OpenTracker framework eases the development and maintenance of hardware setups in a more flexible manner than what is typically offered by virtual reality development packages. This goal is achieved by using an object-oriented design based on XML, taking full advantage of this new technology by allowing to use standard XML tools for development, configuration and documentation. The OpenTracker engine is based on a data flow concept for multi-modal events. A multi-threaded execution model takes care of tunable performance. Transparent network access allows easy development of decoupled simulation models. Finally, the application developer's interface features both a time-based and an event based model, that can be used simultaneously, to serve a large range of applications. OpenTracker is a first attempt towards a \"'write once, input anywhere\"' approach to virtual reality application development. To support these claims, integration into an existing augmented reality system is demonstrated. We also show how a prototype tracking equipment for mobile augmented reality can be assembled from consumer input devices with the aid of OpenTracker. Once development is sufficiently mature, it is planned to make Open-Tracker available to the public under an open source software license.",
"title": ""
},
{
"docid": "d8d068254761619ccbcd0bbab896d3b2",
"text": "In this article we illustrate a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management.",
"title": ""
},
{
"docid": "f7d30db4b04b33676d386953aebf503c",
"text": "Microvascular free flap transfer currently represents one of the most popular methods for mandibularreconstruction. With the various free flap options nowavailable, there is a general consensus that no single kindof osseous or osteocutaneous flap can resolve the entire spectrum of mandibular defects. A suitable flap, therefore, should be selected according to the specific type of bone and soft tissue defect. We have developed an algorithm for mandibular reconstruction, in which the bony defect is termed as either “lateral” or “anterior” and the soft-tissue defect is classified as “none,” “skin or mucosal,” or “through-and-through.” For proper flap selection, the bony defect condition should be considered first, followed by the soft-tissue defect condition. When the bony defect is “lateral” and the soft tissue is not defective, the ilium is the best choice. When the bony defect is “lateral” and a small “skin or mucosal” soft-tissue defect is present, the fibula represents the optimal choice. When the bony defect is “lateral” and an extensive “skin or mucosal” or “through-and-through” soft-tissue defect exists, the scapula should be selected. When the bony defect is “anterior,” the fibula should always be selected. However, when an “anterior” bone defect also displays an “extensive” or “through-and-through” soft-tissue defect, the fibula should be usedwith other soft-tissue flaps. Flaps such as a forearm flap, anterior thigh flap, or rectus abdominis musculocutaneous flap are suitable, depending on the size of the soft-tissue defect.",
"title": ""
},
{
"docid": "622b0d9526dfee6abe3a605fa83e92ed",
"text": "Biomedical Image Processing is a growing and demanding field. It comprises of many different types of imaging methods likes CT scans, X-Ray and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Out of the various types of medical imaging processes available to us, MRI is the most reliable and safe. It does not involve exposing the body to any sorts of harmful radiation. This MRI can then be processed, and the tumor can be segmented. Tumor Segmentation includes the use of several different techniques. The whole process of detecting brain tumor from an MRI can be classified into four different categories: Pre-Processing, Segmentation, Optimization and Feature Extraction. This survey involves reviewing the research by other professionals and compiling it into one paper.",
"title": ""
},
{
"docid": "b9779b478ee8714d5b0f6ce3e0857c9f",
"text": "Sensor-based motion recognition integrates the emerging area of wearable sensors with novel machine learning techniques to make sense of low-level sensor data and provide rich contextual information in a real-life application. Although Human Activity Recognition (HAR) problem has been drawing the attention of researchers, it is still a subject of much debate due to the diverse nature of human activities and their tracking methods. Finding the best predictive model in this problem while considering different sources of heterogeneities can be very difficult to analyze theoretically, which stresses the need of an experimental study. Therefore, in this paper, we first create the most complete dataset, focusing on accelerometer sensors, with various sources of heterogeneities. We then conduct an extensive analysis on feature representations and classification techniques (the most comprehensive comparison yet with 293 classifiers) for activity recognition. Principal component analysis is applied to reduce the feature vector dimension while keeping essential information. The average classification accuracy of eight sensor positions is reported to be 96.44% ± 1.62% with 10-fold evaluation, whereas accuracy of 79.92% ± 9.68% is reached in the subject-independent evaluation. This study presents significant evidence that we can build predictive models for HAR problem under more realistic conditions, and still achieve highly accurate results.",
"title": ""
},
{
"docid": "b10074ccf133a3c18a2029a5fe52f7ff",
"text": "Maneuvering vessel detection and tracking (VDT), incorporated with state estimation and trajectory prediction, are important tasks for vessel navigational systems (VNSs), as well as vessel traffic monitoring and information systems (VTMISs) to improve maritime safety and security in ocean navigation. Although conventional VNSs and VTMISs are equipped with maritime surveillance systems for the same purpose, intelligent capabilities for vessel detection, tracking, state estimation, and navigational trajectory prediction are underdeveloped. Therefore, the integration of intelligent features into VTMISs is proposed in this paper. The first part of this paper is focused on detecting and tracking of a multiple-vessel situation. An artificial neural network (ANN) is proposed as the mechanism for detecting and tracking multiple vessels. In the second part of this paper, vessel state estimation and navigational trajectory prediction of a single-vessel situation are considered. An extended Kalman filter (EKF) is proposed for the estimation of vessel states and further used for the prediction of vessel trajectories. Finally, the proposed VTMIS is simulated, and successful simulation results are presented in this paper.",
"title": ""
},
{
"docid": "c464a5f086f09d39b15beb3b3fbfec54",
"text": "Sweet cherry, a non-climacteric fruit, is usually cold-stored during post-harvest to prevent over-ripening. The aim of the study was to evaluate the role of abscisic acid (ABA) on fruit growth and ripening of this fruit, considering as well its putative implication in over-ripening and effects on quality. We measured the endogenous concentrations of ABA during the ripening of sweet cherries (Prunus avium L. var. Prime Giant) collected from orchard trees and in cherries exposed to 4°C and 23°C during 10 days of post-harvest. Furthermore, we examined to what extent endogenous ABA concentrations were related to quality parameters, such as fruit biomass, anthocyanin accumulation and levels of vitamins C and E. Endogenous concentrations of ABA in fruits increased progressively during fruit growth and ripening on the tree, to decrease later during post-harvest at 23°C. Cold treatment, however, increased ABA levels and led to an inhibition of over-ripening. Furthermore, ABA levels positively correlated with anthocyanin and vitamin E levels during pre-harvest, but not during post-harvest. We conclude that ABA plays a major role in sweet cherry development, stimulating its ripening process and positively influencing quality parameters during pre-harvest. The possible influence of ABA preventing over-ripening in cold-stored sweet cherries is also discussed.",
"title": ""
},
{
"docid": "3681c33edbb6f4d7ac370699b38e67c8",
"text": "The volume of adult content on the world wide web is increasing rapidly. This makes an automatic detection of adult content a more challenging task, when eliminating access to ill-suited websites. Most pornographic webpage–filtering systems are based on n-gram, naïve Bayes, K-nearest neighbor, and keyword-matching mechanisms, which do not provide perfect extraction of useful data from unstructured web content. These systems have no reasoning capability to intelligently filter web content to classify medical webpages from adult content webpages. In addition, it is easy for children to access pornographic webpages due to the freely available adult content on the Internet. It creates a problem for parents wishing to protect their children from such unsuitable content. To solve these problems, this paper presents a support vector machine (SVM) and fuzzy ontology–based semantic knowledge system to systematically filter web content and to identify and block access to pornography. The proposed system classifies URLs into adult URLs and medical URLs by using a blacklist of censored webpages to provide accuracy and speed. The proposed fuzzy ontology then extracts web content to find website type (adult content, normal, and medical) and block pornographic content. In order to examine the efficiency of the proposed system, fuzzy ontology, and intelligent tools are developed using Protégé 5.1 and Java, respectively. Experimental analysis shows that the performance of the proposed system is efficient for automatically detecting and blocking adult content.",
"title": ""
},
{
"docid": "b69e6bf80ad13a60819ae2ebbcc93ae0",
"text": "Computational manufacturing technologies such as 3D printing hold the potential for creating objects with previously undreamed-of combinations of functionality and physical properties. Human designers, however, typically cannot exploit the full geometric (and often material) complexity of which these devices are capable. This STAR examines recent systems developed by the computer graphics community in which designers specify higher-level goals ranging from structural integrity and deformation to appearance and aesthetics, with the final detailed shape and manufacturing instructions emerging as the result of computation. It summarizes frameworks for interaction, simulation, and optimization, as well as documents the range of general objectives and domain-specific goals that have been considered. An important unifying thread in this analysis is that different underlying geometric and physical representations are necessary for different tasks: we document over a dozen classes of representations that have been used for fabrication-aware design in the literature. We analyze how these classes possess obvious advantages for some needs, but have also been used in creative manners to facilitate unexpected problem solutions.",
"title": ""
},
{
"docid": "e7c77e563892c7807126c3feca79215a",
"text": "With the rapid increase in Android device popularity, the capabilities that the diverse user base demands from Android have significantly exceeded its original design. As a result, people have to seek ways to obtain the permissions not directly offered to ordinary users. A typical way to do that is using the Android Debug Bridge (ADB), a developer tool that has been granted permissions to use critical system resources. Apps adopting this solution have combined tens of millions of downloads on Google Play. However, we found that such ADB-level capabilities are not well guarded by Android. A prominent example we investigated is the apps that perform programmatic screenshots, a much-needed capability Android fails to support. We found that all such apps in the market inadvertently expose this ADB capability to any party with the INTERNET permission on the same device. With this exposure, a malicious app can be built to stealthily and intelligently collect sensitive user data through screenshots. To understand the threat, we built Screenmilker, an app that can detect the right moment to monitor the screen and pick up a user’s password when she is typing in real time. We show that this can be done efficiently by leveraging the unique design of smartphone user interfaces and its public resources. Such an understanding also informs Android developers how to protect this screenshot capability, should they consider providing an interface to let third-party developers use it in the future, and more generally the security risks of the ADB workaround, a standard technique gaining popularity in app development. Based on the understanding, we present a mitigation mechanism that controls the exposure of the ADB capabilities only to authorized apps.",
"title": ""
},
{
"docid": "dbd3234f12aff3ee0e01db8a16b13cad",
"text": "Information visualization has traditionally limited itself to 2D representations, primarily due to the prevalence of 2D displays and report formats. However, there has been a recent surge in popularity of consumer grade 3D displays and immersive head-mounted displays (HMDs). The ubiquity of such displays enables the possibility of immersive, stereoscopic visualization environments. While techniques that utilize such immersive environments have been explored extensively for spatial and scientific visualizations, contrastingly very little has been explored for information visualization. In this paper, we present our considerations of layout, rendering, and interaction methods for visualizing graphs in an immersive environment. We conducted a user study to evaluate our techniques compared to traditional 2D graph visualization. The results show that participants answered significantly faster with a fewer number of interactions using our techniques, especially for more difficult tasks. While the overall correctness rates are not significantly different, we found that participants gave significantly more correct answers using our techniques for larger graphs.",
"title": ""
},
{
"docid": "021f8f1a831e1f7a9b363bc240cc527b",
"text": "This paper presents a new graph-based approach that induces synsets using synonymy dictionaries and word embeddings. First, we build a weighted graph of synonyms extracted from commonly available resources, such as Wiktionary. Second, we apply word sense induction to deal with ambiguous words. Finally, we cluster the disambiguated version of the ambiguous input graph into synsets. Our meta-clustering approach lets us use an efficient hard clustering algorithm to perform a fuzzy clustering of the graph. Despite its simplicity, our approach shows excellent results, outperforming five competitive state-of-the-art methods in terms of F-score on three gold standard datasets for English and Russian derived from large-scale manually constructed lexical resources.",
"title": ""
},
{
"docid": "661b7615e660ae8e0a3b2a7294b9b921",
"text": "In this paper, a very simple solution-based method is employed to coat amorphous MnO2 onto crystalline SnO2 nanowires grown on stainless steel substrate, which utilizes the better electronic conductivity of SnO2 nanowires as the supporting backbone to deposit MnO2 for supercapacitor electrodes. Cyclic voltammetry (CV) and galvanostatic charge/discharge methods have been carried out to study the capacitive properties of the SnO2/MnO2 composites. A specific capacitance (based on MnO2) as high as 637 F g(-1) is obtained at a scan rate of 2 mV s(-1) (800 F g(-1) at a current density of 1 A g(-1)) in 1 M Na2SO4 aqueous solution. The energy density and power density measured at 50 A g(-1) are 35.4 W h kg(-1) and 25 kW kg(-1), respectively, demonstrating the good rate capability. In addition, the SnO2/MnO2 composite electrode shows excellent long-term cyclic stability (less than 1.2% decrease of the specific capacitance is observed after 2000 CV cycles). The temperature-dependent capacitive behavior is also discussed. Such high-performance capacitive behavior indicates that the SnO2/MnO2 composite is a very promising electrode material for fabricating supercapacitors.",
"title": ""
},
{
"docid": "342c95873edb988c3e055a1714753691",
"text": "KEY CLINICAL MESSAGE\nThanatophoric dysplasia is typically a neonatal lethal condition. However, for those rare individuals who do survive, there is the development of seizures, progression of craniocervical stenosis, ventilator dependence, and limitations in motor and cognitive abilities. Families must be made aware of these issues during the discussion of management plans.",
"title": ""
}
] |
scidocsrr
|
8afb9822659b7118f13c1a8847b836ab
|
Robust Sclera Recognition System With Novel Sclera Segmentation and Validation Techniques
|
[
{
"docid": "ae087768fe3e7464d4f1f12a03ffc877",
"text": "In this paper, we propose a novel sclera template generation, manipulation, and matching scheme for cancelable identity verification. Essentially, a region indicator matrix is generated based on an angular grid reference frame. For binary feature template generation, a random matrix and a local binary patterns (LBP) operator are utilized. Subsequently, the template is manipulated by user-specific random sequence attachment and bit shifting. Finally, matching is performed by a normalized Hamming distance comparison. Some experimental results on UBIRIS v1 database are included with discussion.",
"title": ""
}
] |
[
{
"docid": "e42c6d51324e5597d773e4c95960c76e",
"text": "In this chapter, we discuss the design of tangible interaction techniques for Mixed Reality environments. We begin by recalling some conceptual models of tangible interaction. Then, we propose an engineering-oriented software/hardware co-design process, based on our experience in developing tangible user interfaces. We present three different tangible user interfaces for real-world applications, and analyse the feedback from the user studies that we conducted. In summary, we conclude that, since tangible user interfaces are part of the real world and provide a seamless interaction with virtual words, they are well-adapted to mix together reality and virtuality. Hence, tangible interaction optimizes a users' virtual tasks, especially in manipulating and controlling 3D digital data in 3D space.",
"title": ""
},
{
"docid": "0cb490aacaf237bdade71479151ab8d2",
"text": "This brief presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. A comparison on commonly used generator polynomials between the proposed design and previously proposed parallel CRC algorithms shows that the proposed design can increase the speed by up to 25% and control or even reduce hardware cost",
"title": ""
},
{
"docid": "70d0f96d42467e1c998bb9969de55a39",
"text": "RGB-D cameras provide both a color image and a depth image which contains the real depth information about per-pixel. The richness of their data and the development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a novel hybrid visual odometry using an RGB-D camera. Different from the original method, it is a pure visual odometry method without any other information, such as inertial data. The important key is hybrid, which means that the odometry can be executed in two different processes depending on the conditions. It consists of two parts, including a feature-based visual odometry and a direct visual odometry. Details about the algorithm are discussed in the paper. Especially, the switch conditions are described in detail. Beside, we evaluate the continuity and robustness for the system on public dataset. The experiments demonstrate that our system has more stable continuity and better robustness.",
"title": ""
},
{
"docid": "d6477bab69274263bc208d19d9ec3ec2",
"text": "Software APIs often contain too many methods and parameters for developers to memorize or navigate effectively. Instead, developers resort to finding answers through online search engines and systems such as Stack Overflow. However, the process of finding and integrating a working solution is often very time-consuming. Though code search engines have increased in quality, there remain significant language- and workflow-gaps in meeting end-user needs. Novice and intermediate programmers often lack the language to query, and the expertise in transferring found code to their task. To address this problem, we present CodeMend, a system to support finding and integration of code. CodeMend leverages a neural embedding model to jointly model natural language and code as mined from large Web and code datasets. We also demonstrate a novel, mixed-initiative, interface to support query and integration steps. Through CodeMend, end-users describe their goal in natural language. The system makes salient the relevant API functions, the lines in the end-user's program that should be changed, as well as proposing the actual change. We demonstrate the utility and accuracy of CodeMend through lab and simulation studies.",
"title": ""
},
{
"docid": "ca7e4eafed84f5dbe5f996ac7c795c91",
"text": "This paper examines the effects of review arousal on perceived helpfulness of online reviews, and on consumers’ emotional responses elicited by the reviews. Drawing on emotion theories in psychology and neuroscience, we focus on four emotions – anger, anxiety, excitement, and enjoyment that are common in the context of online reviews. The effects of the four emotions embedded in online reviews were examined using a controlled experiment. Our preliminary results show that reviews embedded with the four emotions (arousing reviews) are perceived to be more helpful than reviews without the emotions embedded (non-arousing reviews). However, reviews embedded with anxiety and enjoyment (low-arousal reviews) are perceived to be more helpfulness that reviews embedded with anger and excitement (high-arousal reviews). Furthermore, compared to reviews embedded with anger, reviews embedded with anxiety are associated with a higher EEG activity that is generally linked to negative emotions. The results suggest a non-linear relationship between review arousal and perceived helpfulness, which can be explained by the consumers’ emotional responses elicited by the reviews.",
"title": ""
},
{
"docid": "7f61235bb8b77376936256dcf251ee0b",
"text": "These practical guidelines for the biological treatment of personality disorders in primary care settings were developed by an international Task Force of the World Federation of Societies of Biological Psychiatry (WFSBP). They embody the results of a systematic review of all available clinical and scientific evidence pertaining to the biological treatment of three specific personality disorders, namely borderline, schizotypal and anxious/avoidant personality disorder in addition to some general recommendations for the whole field. The guidelines cover disease definition, classification, epidemiology, course and current knowledge on biological underpinnings, and provide a detailed overview on the state of the art of clinical management. They deal primarily with biological treatment (including antidepressants, neuroleptics, mood stabilizers and some further pharmacological agents) and discuss the relative significance of medication within the spectrum of treatment strategies that have been tested for patients with personality disorders, up to now. The recommendations should help the clinician to evaluate the efficacy spectrum of psychotropic drugs and therefore to select the drug best suited to the specific psychopathology of an individual patient diagnosed for a personality disorder.",
"title": ""
},
{
"docid": "d063f8a20e2b6522fe637794e27d7275",
"text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.",
"title": ""
},
{
"docid": "30f0583f57317b9def629c7e81c934d8",
"text": "The growth in the size of networks and the number of vulnerabilities is increasingly challenging to manage network security. Especially, difficult to manage are multi-step attacks which are attacks using one or more vulnerabilities as stepping stones. Attack graphs are widely used for analyzing multi-step attacks. However, since these graphs had large sizes, it was too expensive to work with. In this paper, we propose a mechanism to manage attack graphs using a divide and conquer approach. To enhance efficiency of risk analyzer working with attack graphs, we converted a large graph to multiple sub-graphs named risk units and provide the light-weighted graphs to the analyzers. As a result, when k order of time complexity algorithms work with an attack graph with n vertices, a division having c of overhead vertices reduces the workloads from nk to r(n + c)k. And the coefficient r becomes smaller geometrically from 2−kdepended on their division rounds. By this workload reduction, risk assessment processes which work with large size attack graphs become more scalable and resource practical.",
"title": ""
},
{
"docid": "f7de95bb35f7f53518f6c86e06ce9e48",
"text": "Domain Generation Algorithms (DGAs) are a popular technique used by contemporary malware for command-and-control (C&C) purposes. Such malware utilizes DGAs to create a set of domain names that, when resolved, provide information necessary to establish a link to a C&C server. Automated discovery of such domain names in real-time DNS traffic is critical for network security as it allows to detect infection, and, in some cases, take countermeasures to disrupt the communication and identify infected machines. Detection of the specific DGA malware family provides the administrator valuable information about the kind of infection and steps that need to be taken. In this paper we compare and evaluate machine learning methods that classify domain names as benign or DGA, and label the latter according to their malware family. Unlike previous work, we select data for test and training sets according to observation time and known seeds. This allows us to assess the robustness of the trained classifiers for detecting domains generated by the same families at a different time or when seeds change. Our study includes tree ensemble models based on human-engineered features and deep neural networks that learn features automatically from domain names. We find that all state-of-the-art classifiers are significantly better at catching domain names from malware families with a time-dependent seed compared to time-invariant DGAs. In addition, when applying the trained classifiers on a day of real traffic, we find that many domain names unjustifiably are flagged as malicious, thereby revealing the shortcomings of relying on a standard whitelist for training a production grade DGA detection system.",
"title": ""
},
{
"docid": "0fd635cfbcbd2d648f5c25ce2cb551a5",
"text": "The main focus of relational learning for knowledge graph completion (KGC) lies in exploiting rich contextual information for facts. Many state-of-the-art models incorporate fact sequences, entity types, and even textual information. Unfortunately, most of them do not fully take advantage of rich structural information in a KG, i.e., connectivity patterns around each entity. In this paper, we propose a context-aware convolutional learning (CACL) model which jointly learns from entities and their multi-hop neighborhoods. Since we directly utilize the connectivity patterns contained in each multi-hop neighborhood, the structural role similarity among entities can be better captured, resulting in more informative entity and relation embeddings. Specifically, CACL collects entities and relations from the multi-hop neighborhood as contextual information according to their relative importance and uniquely maps them to a linear vector space. Our convolutional architecture leverages a deep learning technique to represent each entity along with its linearly mapped contextual information. Thus, we can elaborately extract the features of key connectivity patterns from the context and incorporate them into a score function which evaluates the validity of facts. Experimental results on the newest datasets show that CACL outperforms existing approaches by successfully enriching embeddings with neighborhood information.",
"title": ""
},
{
"docid": "f5e44676e9ce8a06bcdb383852fb117f",
"text": "We explore techniques to significantly improve the compute efficiency and performance of Deep Convolution Networks without impacting their accuracy. To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero-values. We achieve the highest reported accuracy of 76.6% Top-1/93% Top-5 on the Imagenet object classification challenge with low-precision network while reducing the compute requirement by ∼3× compared to a full-precision network that achieves similar accuracy. Furthermore, to fully exploit the benefits of our low-precision networks, we build a deep learning accelerator core, DLAC, that can achieve up to 1 TFLOP/mm2 equivalent for single-precision floating-point operations (∼2 TFLOP/mm2 for half-precision), which is ∼5× better than Linear Algebra Core [16] and ∼4× better than previous deep learning accelerator proposal [8].",
"title": ""
},
{
"docid": "092b55732087aef57a1164c228c00d8b",
"text": "Penetration of advanced sensor systems such as advanced metering infrastructure (AMI), high-frequency overhead and underground current and voltage sensors have been increasing significantly in power distribution systems over the past few years. According to U.S. energy information administration (EIA), the aggregated AMI installation experienced a 17 times increase from 2007 to 2012. The AMI usually collects electricity usage data every 15 minute, instead of once a month. This is a 3,000 fold increase in the amount of data utilities would have processed in the past. It is estimated that the electricity usage data collected through AMI in the U.S. amount to well above 100 terabytes in 2012. To unleash full value of the complex data sets, innovative big data algorithms need to be developed to transform the way we operate and plan for the distribution system. This paper not only proposes promising applications but also provides an in-depth discussion of technical and regulatory challenges and risks of big data analytics in power distribution systems. In addition, a flexible system architecture design is proposed to handle heterogeneous big data analysis workloads.",
"title": ""
},
{
"docid": "f81dd0c86a7b45e743e4be117b4030c2",
"text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.",
"title": ""
},
{
"docid": "137449952a30730185552ed6fca4d8ba",
"text": "BACKGROUND\nPoor sleep quality and depression negatively impact the health-related quality of life of patients with type 2 diabetes, but the combined effect of the two factors is unknown. This study aimed to assess the interactive effects of poor sleep quality and depression on the quality of life in patients with type 2 diabetes.\n\n\nMETHODS\nPatients with type 2 diabetes (n = 944) completed the Diabetes Specificity Quality of Life scale (DSQL) and questionnaires on sleep quality and depression. The products of poor sleep quality and depression were added to the logistic regression model to evaluate their multiplicative interactions, which were expressed as the relative excess risk of interaction (RERI), the attributable proportion (AP) of interaction, and the synergy index (S).\n\n\nRESULTS\nPoor sleep quality and depressive symptoms both increased DSQL scores. The co-presence of poor sleep quality and depressive symptoms significantly reduced DSQL scores by a factor of 3.96 on biological interaction measures. The relative excess risk of interaction was 1.08. The combined effect of poor sleep quality and depressive symptoms was observed only in women.\n\n\nCONCLUSIONS\nPatients with both depressive symptoms and poor sleep quality are at an increased risk of reduction in diabetes-related quality of life, and this risk is particularly high for women due to the interaction effect. Clinicians should screen for and treat sleep difficulties and depressive symptoms in patients with type 2 diabetes.",
"title": ""
},
{
"docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442",
"text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.",
"title": ""
},
{
"docid": "8feb5dce809acf0efb63d322f0526fcf",
"text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.",
"title": ""
},
{
"docid": "c6399386c27aa8d039094d23e76aed8e",
"text": "Spin systems and harmonic oscillators comprise two archetypes in quantum mechanics. The spin-1/2 system, with two quantum energy levels, is essentially the most nonlinear system found in nature, whereas the harmonic oscillator represents the most linear, with an infinite number of evenly spaced quantum levels. A significant difference between these systems is that a two-level spin can be prepared in an arbitrary quantum state using classical excitations, whereas classical excitations applied to an oscillator generate a coherent state, nearly indistinguishable from a classical state. Quantum behaviour in an oscillator is most obvious in Fock states, which are states with specific numbers of energy quanta, but such states are hard to create. Here we demonstrate the controlled generation of multi-photon Fock states in a solid-state system. We use a superconducting phase qubit, which is a close approximation to a two-level spin system, coupled to a microwave resonator, which acts as a harmonic oscillator, to prepare and analyse pure Fock states with up to six photons. We contrast the Fock states with coherent states generated using classical pulses applied directly to the resonator.",
"title": ""
},
{
"docid": "1eb2715d2dfec82262c7b3870db9b649",
"text": "Leadership is a crucial component to the success of academic health science centers (AHCs) within the shifting U.S. healthcare environment. Leadership talent acquisition and development within AHCs is immature and approaches to leadership and its evolution will be inevitable to refine operations to accomplish the critical missions of clinical service delivery, the medical education continuum, and innovations toward discovery. To reach higher organizational outcomes in AHCs requires a reflection on what leadership approaches are in place and how they can better support these missions. Transactional leadership approaches are traditionally used in AHCs and this commentary suggests that movement toward a transformational approach is a performance improvement opportunity for AHC leaders. This commentary describes the transactional and transformational approaches, how they complement each other, and how to access the transformational approach. Drawing on behavioral sciences, suggestions are made on how a transactional leader can change her cognitions to align with the four dimensions of the transformational leadership approach.",
"title": ""
},
{
"docid": "408f58b7dd6cb1e6be9060f112773888",
"text": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios.",
"title": ""
},
{
"docid": "a433ebaeeb5dc5b68976b3ecb770c0cd",
"text": "1 abstract The importance of the inspection process has been magniied by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and nished goods. A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined. A classiication tree for these algorithms is presented and the algorithms are grouped according to this classiication. This survey concentrates mainly on image analysis and fault detection strategies, these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. 2 Introduction Many important applications of vision are found in the manufacturing and defense industries. In particular, the areas in manufacturing where vision plays a major role are inspection, measurements , and some assembly tasks. The order among these topics closely reeects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and nished products. One of the most diicult tasks in this process is that of inspecting for visual appearance-an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost) image processing, pattern recognition, and artiicial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello 1] gives a summary of the machine vision inspection applications in electronics industry. 01",
"title": ""
}
] |
scidocsrr
|
1db8ca5e4f9226fa3e746ebfe8f93ac3
|
Happiness Is Everything , or Is It ? Explorations on the Meaning of Psychological Well-Being
|
[
{
"docid": "10990c819cbc6dfb88b4c2de829f27f1",
"text": "Building on the fraudulent foundation established by atheist Sigmund Freud, psychoanalyst Erik Erikson has proposed a series of eight \"life cycles,\" each with an accompanying \"life crisis,\" to explain both human behavior and man's religious tendencies. Erikson's extensive application of his theories to the life of Martin Luther reveals his contempt for the living God who has revealed Himself in Scripture. This paper will consider Erikson's view of man, sin, redemption, and religion, along with an analysis of his eight \"life cycles.\" Finally, we will critique his attempted psychoanalysis of Martin Luther.",
"title": ""
}
] |
[
{
"docid": "cb086fa252f4db172b9c7ac7e1081955",
"text": "Drivable free space information is vital for autonomous vehicles that have to plan evasive maneu vers in realtime. In this paper, we present a new efficient met hod for environmental free space detection with laser scann er based on 2D occupancy grid maps (OGM) to be used for Advance d Driving Assistance Systems (ADAS) and Collision Avo idance Systems (CAS). Firstly, we introduce an enhanced in verse sensor model tailored for high-resolution laser scanners f or building OGM. It compensates the unreflected beams and deals with the ray casting to grid cells accuracy and computationa l effort problems. Secondly, we introduce the ‘vehicle on a circle for grid maps’ map alignment algorithm that allows building more accurate local maps by avoiding the computationally expensive inaccurate operations of image sub-pixel shifting a nd rotation. The resulted grid map is more convenient for ADAS f eatures than existing methods, as it allows using less memo ry sizes, and hence, results into a better real-time performance. Thirdly, we present an algorithm to detect what we call the ‘in-sight edges’. These edges guarantee modeling the free space area with a single polygon of a fixed number of vertices regardless th e driving situation and map complexity. The results from real world experiments show the effectiveness of our approach. Keywords— Occupancy Grid Map; Static Free Space Detection; Advanced Driving Assistance Systems; las er canner; autonomous driving",
"title": ""
},
{
"docid": "94f39416ba9918e664fb1cd48732e3ae",
"text": "In this paper, a nanostructured biosensor is developed to detect glucose in tear by using fluorescence resonance energy transfer (FRET) quenching mechanism. The designed FRET pair, including the donor, CdSe/ZnS quantum dots (QDs), and the acceptor, dextran-binding malachite green (MG-dextran), was conjugated to concanavalin A (Con A), an enzyme with specific affinity to glucose. In the presence of glucose, the quenched emission of QDs through the FRET mechanism is restored by displacing the dextran from Con A. To have a dual-modulation sensor for convenient and accurate detection, the nanostructured FRET sensors were assembled onto a patterned ZnO nanorod array deposited on the synthetic silicone hydrogel. Consequently, the concentration of glucose detected by the patterned sensor can be converted to fluorescence spectra with high signal-to-noise ratio and calibrated image pixel value. The photoluminescence intensity of the patterned FRET sensor increases linearly with increasing concentration of glucose from 0.03mmol/L to 3mmol/L, which covers the range of tear glucose levels for both diabetics and healthy subjects. Meanwhile, the calibrated values of pixel intensities of the fluorescence images captured by a handhold fluorescence microscope increases with increasing glucose. Four male Sprague-Dawley rats with different blood glucose concentrations were utilized to demonstrate the quick response of the patterned FRET sensor to 2µL of tear samples.",
"title": ""
},
{
"docid": "8ccb5aeb084c9a6223dc01fa296d908e",
"text": "Effective chronic disease management is essential to improve positive health outcomes, and incentive strategies are useful in promoting self-care with longevity. Gamification, applied with mHealth (mobile health) applications, has the potential to better facilitate patient self-management. This review article addresses a knowledge gap around the effective use of gamification design principles, or mechanics, in developing mHealth applications. Badges, leaderboards, points and levels, challenges and quests, social engagement loops, and onboarding are mechanics that comprise gamification. These mechanics are defined and explained from a design and development perspective. Health and fitness applications with gamification mechanics include: bant which uses points, levels, and social engagement, mySugr which uses challenges and quests, RunKeeper which uses leaderboards as well as social engagement loops and onboarding, Fitocracy which uses badges, and Mango Health, which uses points and levels. Specific design considerations are explored, an example of the efficacy of a gamified mHealth implementation in facilitating improved self-management is provided, limitations to this work are discussed, a link between the principles of gaming and gamification in health and wellness technologies is provided, and suggestions for future work are made. We conclude that gamification could be leveraged in developing applications with the potential to better facilitate self-management in persons with chronic conditions.",
"title": ""
},
{
"docid": "398c791338adf824a81a2bfb8f35c6bb",
"text": "Hybrid Reality Environments represent a new kind of visualization spaces that blur the line between virtual environments and high resolution tiled display walls. This paper outlines the design and implementation of the CAVE2 TM Hybrid Reality Environment. CAVE2 is the world’s first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it will enable users to simultaneously view both 2D and 3D information, providing more flexibility for mixed media applications. CAVE2 is a cylindrical system of 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axisoptimized passive stereo LCD panels, creating an approximately 320 degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewingallowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D model, the room can operate like a traditional tiled display wall enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be simultaneously supported. The ability to treat immersive work spaces in this Hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE) a system for supporting 2D tiled displays, with Omegalib a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.",
"title": ""
},
{
"docid": "f042dd6b78c65541e657c48452a1e0e4",
"text": "We present a general framework for semantic role labeling. The framework combines a machine-learning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stagethe pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.",
"title": ""
},
{
"docid": "7138c13d88d87df02c7dbab4c63328c4",
"text": "Banisteriopsis caapi is the basic ingredient of ayahuasca, a psychotropic plant tea used in the Amazon for ritual and medicinal purposes, and by interested individuals worldwide. Animal studies and recent clinical research suggests that B. caapi preparations show antidepressant activity, a therapeutic effect that has been linked to hippocampal neurogenesis. Here we report that harmine, tetrahydroharmine and harmaline, the three main alkaloids present in B. caapi, and the harmine metabolite harmol, stimulate adult neurogenesis in vitro. In neurospheres prepared from progenitor cells obtained from the subventricular and the subgranular zones of adult mice brains, all compounds stimulated neural stem cell proliferation, migration, and differentiation into adult neurons. These findings suggest that modulation of brain plasticity could be a major contribution to the antidepressant effects of ayahuasca. They also expand the potential application of B. caapi alkaloids to other brain disorders that may benefit from stimulation of endogenous neural precursor niches.",
"title": ""
},
{
"docid": "f69113c023a9900be69fd6109c6d5d30",
"text": "The IETF designed the Routing Protocol for Low power and Lossy Networks (RPL) as a candidate for use in constrained networks. Keeping in mind the different requirements of such networks, the protocol was designed to support multiple routing topologies, called DODAGs, constructed using different objective functions, so as to optimize routing based on divergent metrics. A DODAG versioning system is incorporated into RPL in order to ensure that the topology does not become stale and that loops are not formed over time. However, an attacker can exploit this versioning system to gain an advantage in the topology and also acquire children that would be forced to route packets via this node. In this paper we present a study of possible attacks that exploit the DODAG version system. The impact on overhead, delivery ratio, end-to-end delay, rank inconsistencies and loops is studied.",
"title": ""
},
{
"docid": "7f0a6e9a1bcdf8b12ac4273138eb7523",
"text": "The graph-search algorithms developed between 60s and 80s were widely used in many fields, from robotics to video games. The A* algorithm shall be mentioned between some of the most important solutions explicitly oriented to motion-robotics, improving the logic of graph search with heuristic principles inside the loop. Nevertheless, one of the most important drawbacks of the A* algorithm resides in the heading constraints connected with the grid characteristics. Different solutions were developed in the last years to cope with this problem, based on postprocessing algorithms or on improvements of the graph-search algorithm itself. A very important one is Theta* that refines the graph search allowing to obtain paths with “any” heading. In the last two years, the Flight Mechanics Research Group of Politecnico di Torino studied and implemented different path planning algorithms. A L. De Filippis (B) · G. Guglieri · F. Quagliotti Dipartimento di Ingegneria Aeronautica e Spaziale, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Turin, Italy e-mail: luca.defilippis@polito.it G. Guglieri e-mail: giorgio.guglieri@polito.it F. Quagliotti e-mail: fulvia.quagliotti@polito.it Matlab based planning tool was developed, collecting four separate approaches: geometric predefined trajectories, manual waypoint definition, automatic waypoint distribution (i.e. optimizing camera payload capabilities) and a comprehensive A*-based algorithm used to generate paths, minimizing risk of collision with orographic obstacles. The tool named PCube exploits Digital Elevation Maps (DEMs) to assess the risk maps and it can be used to generate waypoint sequences for UAVs autopilots. In order to improve the A*-based algorithm, the solution is extended to tri-dimensional environments implementing a more effective graph search (based on Theta*). In this paper the application of basic Theta* to tridimensional path planning will be presented. Particularly, the algorithm is applied to orographic obstacles and in urban environments, to evaluate the solution for different kinds of obstacles. Finally, a comparison with the A* algorithm will be introduced as a metric of the algorithm",
"title": ""
},
{
"docid": "abb54a0c155805e7be2602265f78ae79",
"text": "In this paper we sketch out a computational theory of spatial cognition motivated by navigational behaviours, ecological requirements, and neural mechanisms as identified in animals and man. Spatial cognition is considered in the context of a cognitive agent built around the action-perception cycle. Besides sensors and effectors, the agent comprises multiple memory structures including a working memory and a longterm memory stage. Spatial longterm memory is modeled along the graph approach, treating recognizable places or poses as nodes and navigational actions as links. Models of working memory and its interaction with reference memory are discussed. The model provides an overall framework of spatial cognition which can be adapted to model different levels of behavioural complexity as well as interactions between working and longterm memory. A number of design questions for building cognitive robots are derived from comparison with biological systems and discussed in the paper.",
"title": ""
},
{
"docid": "cd527e5a6aefe889ee4ac56d70cc834e",
"text": "In this paper we analyze Tendermint proposed in [7], one of the most popular blockchains based on PBFT Consensus. The current paper dissects Tendermint under various system communication models and Byzantine adversaries. Our methodology consists in identifying the algorithmic principles of Tendermint necessary for a specific combination of communication model adversary. This methodology allowed to identify bugs [3] in preliminary versions of the protocol ([19], [7]) and to prove its correctness under the most adversarial conditions: an eventually synchronous communication model and asymmetric Byzantine faults.",
"title": ""
},
{
"docid": "7332f08a9447fd321f7e40609cfabfc0",
"text": "Requirements Engineering und Management gewinnen in allen Bereichen der Systementwicklung stetig an Bedeutung. Zusammenhänge zwischen der Qualität der Anforderungserhebung und des Projekterfolges, wie von der Standish Group im jährlich erscheinenden Chaos Report [Standish 2004] untersucht, sind den meisten ein Begriff. Bei der Erhebung von Anforderungen treten immer wieder ähnliche Probleme auf. Dabei spielen unterschiedliche Faktoren und Gegebenheiten eine Rolle, die beachtet werden müssen. Es gibt mehrere Möglichkeiten, die Tücken der Analysephase zu meistern; eine Hilfe bietet der Einsatz der in diesem Artikel vorgestellten Methoden zur Anforderungserhebung. Auch wenn die Anforderungen korrekt und vollständig erhoben sind, ist es eine Kunst, diese zu verwalten. In der heutigen Zeit der verteilten Projekte ist es eine Herausforderung, die Dokumentation für jeden Beteiligten ständig verfügbar, nachvollziehbar und eindeutig zu erhalten. Requirements Management rüstet den Analytiker mit Methoden aus, um sich dieser Herausforderung zu stellen. Änderungen von Stakeholder-Wünschen an bestehenden Anforderungen stellen besondere Ansprüche an das Requirements Management, doch mithilfe eines Change-Management-Prozesses können auch diese bewältigt werden. Metriken und Traceability unterstützen bei der Aufwandsabschätzung für Änderungsanträge.",
"title": ""
},
{
"docid": "09f36704e0bbd914f7ce6b5c7e0da228",
"text": "Studies have repeatedly shown that users are increasingly concerned about their privacy when they go online. In response to both public interest and regulatory pressures, privacy policies have become almost ubiquitous. An estimated 77% of websites now post a privacy policy. These policies differ greatly from site to site, and often address issues that are different from those that users care about. They are in most cases the users' only source of information.This paper evaluates the usability of online privacy policies, as well as the practice of posting them. We analyze 64 current privacy policies, their accessibility, writing, content and evolution over time. We examine how well these policies meet user needs and how they can be improved. We determine that significant changes need to be made to current practice to meet regulatory and usability requirements.",
"title": ""
},
{
"docid": "293f102f8e6cedb4b93856224f081272",
"text": "In this paper, we propose a decision-based, signal-adaptive median filtering algorithm for removal of impulse noise. Our algorithm achieves accurate noise detection and high SNR measures without smearing the fine details and edges in the image. The notion of homogeneity level is defined for pixel values based on their global and local statistical properties. The cooccurrence matrix technique is used to represent the correlations between a pixel and its neighbors, and to derive the upper and lower bound of the homogeneity level. Noise detection is performed at two stages: noise candidates are first selected using the homogeneity level, and then a refining process follows to eliminate false detections. The noise detection scheme does not use a quantitative decision measure, but uses qualitative structural information, and it is not subject to burdensome computations for optimization of the threshold values. Empirical results indicate that our scheme performs significantly better than other median filters, in terms of noise suppression and detail preservation.",
"title": ""
},
{
"docid": "32b2cd6b63c6fc4de5b086772ef9d319",
"text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.",
"title": ""
},
{
"docid": "8a077d963a9df5528583388c3e1a229d",
"text": "Context-aware recommender systems improve context-free recommenders by exploiting the knowledge of the contextual situation under which a user experienced and rated an item. They use data sets of contextually-tagged ratings to predict how the target user would evaluate (rate) an item in a given contextual situation, with the ultimate goal to recommend the items with the best estimated ratings. This paper describes and evaluates a pre-filtering approach to context-aware recommendation, called distributional-semantics pre-filtering (DSPF), which exploits in a novel way the distributional semantics of contextual conditions to build more precise context-aware rating prediction models. In DSPF, given a target contextual situation (of a target user), a matrix-factorization predictive model is built by using the ratings tagged with the contextual situations most similar to the target one. Then, this model is used to compute rating predictions and identify recommendations for that specific target contextual situation. In the proposed approach, the definition of the similarity of contextual situations is based on the distributional semantics of their composing conditions: situations are similar if they influence the user’s ratings in a similar way. This notion of similarity has the advantage of being directly derived from the rating data; hence it does not require a context taxonomy. We analyze the effectiveness of DSPF varying the specific method used to compute the situation-to-situation similarity. We also show how DSPF can be further improved by using clustering techniques. Finally, we evaluate DSPF on several contextually-tagged data sets and demonstrate that it outperforms state-of-the-art context-aware approaches.",
"title": ""
},
{
"docid": "9ccbd750bd39e0451d98a7371c2b0914",
"text": "The aim of this study was to assess the effect of inspiratory muscle training (IMT) on resistance to fatigue of the diaphragm (D), parasternal (PS), sternocleidomastoid (SCM) and scalene (SC) muscles in healthy humans during exhaustive exercise. Daily inspiratory muscle strength training was performed for 3 weeks in 10 male subjects (at a pressure threshold load of 60% of maximal inspiratory pressure (MIP) for the first week, 70% of MIP for the second week, and 80% of MIP for the third week). Before and after training, subjects performed an incremental cycle test to exhaustion. Maximal inspiratory pressure and EMG-analysis served as indices of inspiratory muscle fatigue assessment. The before-to-after exercise decreases in MIP and centroid frequency (fc) of the EMG (D, PS, SCM, and SC) power spectrum (P<0.05) were observed in all subjects before the IMT intervention. Such changes were absent after the IMT. The study found that in healthy subjects, IMT results in significant increase in MIP (+18%), a delay of inspiratory muscle fatigue during exhaustive exercise, and a significant improvement in maximal work performance. We conclude that the IMT elicits resistance to the development of inspiratory muscles fatigue during high-intensity exercise.",
"title": ""
},
{
"docid": "4bfbf4b3135241b2e8d61a954c8fe7c8",
"text": "This study examined adolescents' emotional reactivity to parents' marital conflict as a mediator of the association between triangulation and adolescents' internalizing problems in a sample of 2-parent families (N = 416)[corrected]. Four waves of annual, multiple-informant data were analyzed (youth ages 11-15 years). The authors used structural equation modeling and found that triangulation was associated with increases in adolescents' internalizing problems, controlling for marital hostility and adolescent externalizing problems. There also was an indirect pathway from triangulation to internalizing problems across time through youths' emotional reactivity. Moderating analyses indicated that the 2nd half of the pathway, the association between emotional reactivity and increased internalizing problems, characterized youth with lower levels of hopefulness and attachment to parents. The findings help detail why triangulation is a risk factor for adolescents' development and which youth will profit most from interventions focused on emotional regulation.",
"title": ""
},
{
"docid": "6a6b47d95cf79792e053efde77bee014",
"text": "Wind energy conversion systems have become a focal point in the research of renewable energy sources. This is in no small part due to the rapid advances in the size of wind generators as well as the development of power electronics and their applicability in wind energy extraction. This paper provides a comprehensive review of past and present converter topologies applicable to permanent magnet generators, induction generators, synchronous generators and doubly fed induction generators. The many different generator-converter combinations are compared on the basis of topology, cost, efficiency, power consumption and control complexity. The features of each generator-converter configuration are considered in the context of wind turbine systems",
"title": ""
},
{
"docid": "bbb9ac7170663ce653ec9cb40db8695b",
"text": "What we believe to be a novel three-dimensional (3D) phase unwrapping algorithm is proposed to unwrap 3D wrapped-phase volumes. It depends on a quality map to unwrap the most reliable voxels first and the least reliable voxels last. The technique follows a discrete unwrapping path to perform the unwrapping process. The performance of this technique was tested on both simulated and real wrapped-phase maps. And it is found to be robust and fast compared with other 3D phase unwrapping algorithms.",
"title": ""
}
] |
scidocsrr
|
bd56c9412a60ba12ec9f8bf2a266c83c
|
Distance Fields for Rapid Collision Detection in Physically Based Modeling
|
[
{
"docid": "05894f874111fd55bd856d4768c61abe",
"text": "Collision detection is of paramount importance for many applications in computer graphics and visualization. Typically, the input to a collision detection algorithm is a large number of geometric objects comprising an environment, together with a set of objects moving within the environment. In addition to determining accurately the contacts that occur between pairs of objects, one needs also to do so at real-time rates. Applications such as haptic force-feedback can require over 1,000 collision queries per second. In this paper, we develop and analyze a method, based on bounding-volume hierarchies, for efficient collision detection for objects moving within highly complex environments. Our choice of bounding volume is to use a “discrete orientation polytope” (“k-dop”), a convex polytope whose facets are determined by halfspaces whose outward normals come from a small fixed set of k orientations. We compare a variety of methods for constructing hierarchies (“BV-trees”) of bounding k-dops. Further, we propose algorithms for maintaining an effective BV-tree of k-dops for moving objects, as they rotate, and for performing fast collision detection using BV-trees of the moving objects and of the environment. Our algorithms have been implemented and tested. We provide experimental evidence showing that our approach yields substantially faster collision detection than previous methods.",
"title": ""
}
] |
[
{
"docid": "db9e401e4c2bdee1187389c340541877",
"text": "We show in this paper how some algebraic methods can be used for fingerprint matching. The described technique is able to compute the score of a match also when the template and test fingerprints have been not correctly acquired. In particular, the match is independent of translations, rotations and scaling transformations of the template. The technique is also able to compute a match score when part of the fingerprint image is incorrect or missed. The algorithm is being implemented in CoCoA, a computer algebra system for doing computations in Commutative Algebra.",
"title": ""
},
{
"docid": "51624e6c70f4eb5f2295393c68ee386c",
"text": "Advances in mobile technologies and devices has changed the way users interact with devices and other users. These new interaction methods and services are offered by the help of intelligent sensing capabilities, using context, location and motion sensors. However, indoor location sensing is mostly achieved by utilizing radio signal (Wi-Fi, Bluetooth, GSM etc.) and nearest neighbor identification. The most common algorithm adopted for Received Signal Strength (RSS)-based location sensing is K Nearest Neighbor (KNN), which calculates K nearest neighboring points to mobile users (MUs). Accordingly, in this paper, we aim to improve the KNN algorithm by enhancing the neighboring point selection by applying k-means clustering approach. In the proposed method, k-means clustering algorithm groups nearest neighbors according to their distance to mobile user. Then the closest group to the mobile user is used to calculate the MU's location. The evaluation results indicate that the performance of clustered KNN is closely tied to the number of clusters, number of neighbors to be clustered and the initiation of the center points in k-mean algorithm. Keywords-component; Received signal strength, k-Means, clustering, location estimation, personal digital assistant (PDA), wireless, indoor positioning",
"title": ""
},
{
"docid": "70fa03bcd9c5eec86050052ea77d30fd",
"text": "The importance of SMEs SMEs (small and medium-sized enterprises) account for 60 to 70 per cent of jobs in most OECD countries, with a particularly large share in Italy and Japan, and a relatively smaller share in the United States. Throughout they also account for a disproportionately large share of new jobs, especially in those countries which have displayed a strong employment record, including the United States and the Netherlands. Some evidence points also to the importance of age, rather than size, in job creation: young firms generate more than their share of employment. However, less than one-half of start-ups survive for more than five years and only a fraction develop into the high-growth firms which make important contributions to job creation. High job turnover poses problems for employment security; and small establishments are often exempt from giving notice to their employees. Small firms also tend to invest less in training and rely relatively more on external recruitment for raising competence. The demand for reliable, relevant and internationally comparable data on SMEs is on the rise, and statistical offices have started to expand their collection and publication of data. International comparability is still weak, however, due to divergent size-class definitions and sector classifications. To enable useful policy analysis, OECD governments need to improve their build-up of data, without creating additional obstacles for firms through the burden of excessive paper work. The greater variance in profitability, survival and growth of SMEs compared to larger firms accounts for special problems in financing. SMEs generally tend to be confronted with higher interest rates, as well as credit rationing due to shortage of collateral. The issues that arise in financing differ considerably between existing and new firms, as well as between those which grow slowly and those that grow rapidly. The expansion of private equity markets, including informal markets, has greatly improved the access to venture capital for start-ups and SMEs, but considerable differences remain among countries. Regulatory burdens remain a major obstacle for SMEs as these firms tend to be poorly equipped to deal with the problems arising from regulations. Access to information about regulations should be made available to SMEs at minimum cost. Policy makers must ensure that the compliance procedures associated with, e.g. R&D and new technologies, are not unnecessarily costly, complex or lengthy. Transparency is of particular importance to SMEs, and information technology has great potential to narrow the information …",
"title": ""
},
{
"docid": "a112cd31e136054bdf9d34c82b960d95",
"text": "We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "235ed0d7a20b67e227db9e35a3865d2b",
"text": "convolutional neural networks are the most widely used deep learning algorithms for traffic signal classification till date[1] but they fail to capture pose, view, orientation of the images because of the intrinsic inability of max pooling layer.This paper proposes a novel method for Traffic sign detection using deep learning architecture called capsule networks that achieves outstanding performance on the German traffic sign dataset.Capsule network consists of capsules which are a group of neurons representing the instantiating parameters of an object like the pose and orientation[2] by using the dynamic routing and route by agreement algorithms.unlike the previous approaches of manual feature extraction,multiple deep neural networks with many parameters,our method eliminates the manual effort and provides resistance to the spatial variances.CNNs ́ can be fooled easily using various adversary attacks[3] and capsule networks can overcome such attacks from the intruders and can offer more reliability in traffic sign detection for autonomous vehicles.Capsule network have achieved the state-of-the-art accuracy of 97.6% on German Traffic Sign Recognition Benchmark dataset (GTSRB).",
"title": ""
},
{
"docid": "4aed0c391351671ccb5297b2fe9d4891",
"text": "Applying evolution to generate simple agent behaviours has become a successful and heavily used practice. However the notion of scaling up behaviour into something more noteworthy and complex is far from elementary. In this paper we propose a method of combining neuroevolution practices with the subsumption paradigm; in which we generate Artificial Neural Network (ANN) layers ordered in a hierarchy such that high-level controllers can override lower behaviours. To explore this proposal we apply our controllers to the dasiaEvoTankspsila domain; a small, dynamic, adversarial environment. Our results show that once layers are evolved we can generate competent and capable results that can deal with hierarchies of multiple layers. Further analysis of results provides interesting insights into design decisions for such controllers, particularly when compared to the original suggestions for the subsumption paradigm.",
"title": ""
},
{
"docid": "8bb30efa3f14fa0860d1e5bc1265c988",
"text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U",
"title": ""
},
{
"docid": "4a8c9a2301ea45d6c18ec5ab5a75a2ba",
"text": "We propose in this paper a computer vision-based posture recognition method for home monitoring of the elderly. The proposed system performs human detection prior to the posture analysis; posture recognition is performed only on a human silhouette. The human detection approach has been designed to be robust to different environmental stimuli. Thus, posture is analyzed with simple and efficient features that are not designed to manage constraints related to the environment but only designed to describe human silhouettes. The posture recognition method, based on fuzzy logic, identifies four static postures and is robust to variation in the distance between the camera and the person, and to the person's morphology. With an accuracy of 74.29% of satisfactory posture recognition, this approach can detect emergency situations such as a fall within a health smart home.",
"title": ""
},
{
"docid": "b15b88a31cc1762618ca976bdf895d57",
"text": "How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1,000+ pixels) natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task. Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead. Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task. We conclude that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.",
"title": ""
},
{
"docid": "a2cf369a67507d38ac1a645e84525497",
"text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. Successful nasal reconstruction was performed with autologous cartilages. Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.",
"title": ""
},
{
"docid": "9ade6407ce2603e27744df1b03728bfc",
"text": "We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone. We employ a quantized Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduce its memory footprint using an SVD-based compression scheme. Additionally, we minimize our memory footprint by using a single language model for both dictation and voice command domains, constructed using Bayesian interpolation. Finally, in order to properly handle device-specific information, such as proper names and other context-dependent information, we inject vocabulary items into the decoder graph and bias the language model on-the-fly. Our system achieves 13.5% word error rate on an open-ended dictation task, running with a median speed that is seven times faster than real-time.",
"title": ""
},
{
"docid": "fe2594f98faa2ceda8b2c25bddc722d1",
"text": "This study aimed at investigating the effect of a suggested EFL Flipped Classroom Teaching Model (EFL-FCTM) on graduate students' English higher-order thinking skills (HOTS), engagement and satisfaction. Also, it investigated the relationship between higher-order thinking skills, engagement and satisfaction. The sample comprised (67) graduate female students; an experimental group (N=33) and a control group (N=34), studying an English course at Taif University, KSA. The study used mixed method design; a pre-post HOTS test was carried out and two 5-Likert scale questionnaires had been designed and distributed; an engagement scale and a satisfaction scale. The findings of the study revealed statistically significant differences between the two group in HOTS in favor of the experimental group. Also, there was significant difference between the pre and post administration of the engagement scale in favor of the post administration. Moreover, students satisfaction on the (EFL-FCTM) was high. Finally, there were high significant relationships between HOTS and student engagement, HOTS and satisfaction and between student engagement and satisfaction.",
"title": ""
},
{
"docid": "617e92bba5d9bd93eaae1718c1da276c",
"text": "This paper describes MAISE, an embedded linear circuit simulator for use mainly within timing and noise analysis tools. MAISE achieves the fastest possible analysis performance over a wide range of circuit sizes and topologies by an adaptive architecture that allows applying the most efficient combination of model reduction algorithms and linear solvers for each class of circuits. The main pillar of adaptability in MAISE is a novel nodal-analysis formulation (PNA) which permits the use of symmetric, positive-definite Cholesky solvers for all circuit topologies. Moreover, frequently occurring special cases, e.g., inductor-resistor tree structures result in particular types of matrices that are solved by an even faster linear time algorithm. Model order reduction algorithms employed in MAISE exploit symmetry and positive-definiteness whenever available and use symmetric-Lanczos iteration and nonstandard inner-products for generating the Krylov subspace basis. The efficiency of the new simulator is supported by a wide range of industrial examples.",
"title": ""
},
{
"docid": "66c2fcf1076796bb0a7fa16b18eac612",
"text": "A firewall is a security guard placed at the point of entry between a private network and the outside Internet such that all incoming and outgoing packets have to pass through it. The function of a firewall is to examine every incoming or outgoing packet and decide whether to accept or discard it. This function is conventionally specified by a sequence of rules, where rules often conflict. To resolve conflicts, the decision for each packet is the decision of the first rule that the packet matches. The current practice of designing a firewall directly as a sequence of rules suffers from three types of major problems: (1) the consistency problem, which means that it is difficult to order the rules correctly; (2) the completeness problem, which means that it is difficult to ensure thorough consideration for all types of traffic; (3) the compactness problem, which means that it is difficult to keep the number of rules small (because some rules may be redundant and some rules may be combined into one rule). To achieve consistency, completeness, and compactness, we propose a new method called Structured Firewall Design, which consists of two steps. First, one designs a firewall using a Firewall Decision Diagram instead of a sequence of often conflicting rules. Second, a program converts the firewall decision diagram into a compact, yet functionally equivalent, sequence of rules. This method addresses the consistency problem because a firewall decision diagram is conflict-free. It addresses the completeness problem because the syntactic requirements of a firewall decision diagram force the designer to consider all types of traffic. It also addresses the compactness problem because in the second step we use two algorithms (namely FDD reduction and FDD marking) to combine rules together, and one algorithm (namely Firewall compaction) to remove redundant rules. Moreover, the techniques and algorithms presented in this paper are extensible to other rule-based systems such as IPsec rules.",
"title": ""
},
{
"docid": "e5ddbe32d1beed6de2e342c5d5fea274",
"text": "Link prediction appears as a central problem of network science, as it calls for unfolding the mechanisms that govern the micro-dynamics of the network. In this work, we are interested in ego-networks, that is the mere information of interactions of a node to its neighbors, in the context of social relationships. As the structural information is very poor, we rely on another source of information to predict links among egos’ neighbors: the timing of interactions. We define several features to capture different kinds of temporal information and apply machine learning methods to combine these various features and improve the quality of the prediction. We demonstrate the efficiency of this temporal approach on a cellphone interaction dataset, pointing out features which prove themselves to perform well in this context, in particular the temporal profile of interactions and elapsed time between contacts.",
"title": ""
},
{
"docid": "12c947a09e6dbaeca955b18900912b96",
"text": "A two stages car detection method using deformable part models with composite feature sets (DPM/CF) is proposed to recognize cars of various types and from multiple viewing angles. In the first stage, a HOG template is matched to detect the bounding box of the entire car of a certain type and viewed from a certain angle (called a t/a pair), which yields a region of interest (ROI). In the second stage, various part detectors using either HOG or the convolution neural network (CNN) features are applied to the ROI for validation. An optimization procedure based on latent logistic regression is adopted to select the optimal part detector's location, window size, and feature to use. Extensive experimental results indicate the proposed DPM/CF system can strike a balance between detection accuracy and training complexity.",
"title": ""
},
{
"docid": "27136e888c3ebfef4ea7105d68a13ffd",
"text": "The huge amount of (potentially) available spectrum makes millimeter wave (mmWave) a promising candidate for fifth generation cellular networks. Unfortunately, differences in the propagation environment as a function of frequency make it hard to make comparisons between systems operating at mmWave and microwave frequencies. This paper presents a simple channel model for evaluating system level performance in mmWave cellular networks. The model uses insights from measurement results that show mmWave is sensitive to blockages revealing very different path loss characteristics between line-of-sight (LOS) and non-line-of-sight (NLOS) links. The conventional path loss model with a single log-distance path loss function and a shadowing term is replaced with a stochastic path loss model with a distance-dependent LOS probability and two different path loss functions to account for LOS and NLOS links. The proposed model is used to compare microwave and mmWave networks in simulations. It is observed that mmWave networks can provide comparable coverage probability with a dense deployment, leading to much higher data rates thanks to the large bandwidth available in the mmWave spectrum.",
"title": ""
},
{
"docid": "68a5b5664afe1d75811e5f0346455689",
"text": "Personality, as defined in psychology, accounts for the individual differences in users’ preferences and behaviour. It has been found that there are significant correlations between personality and users’ characteristics that are traditionally used by recommender systems ( e.g. music preferences, social media behaviour, learning styles etc.). Among the many models of personality, the Five Factor Model (FFM) appears suitable for usage in recommender systems as it can be quantitatively measured (i.e. numerical values for each of the factors, namely, openness, conscientiousness, extraversion, agreeableness and neuroticism). The acquisition of the personality factors for an observed user can be done explicitly through questionnaires or implicitly using machine learning techniques with features extracted from social media streams or mobile phone call logs. There are, although limited, a number of available datasets to use in offline recommender systems experiment. Studies have shown that personality was successful at tackling the cold-start problem, making group recommendations, addressing cross-domain preferences4 and at generating diverse recommendations. However, a number of challenges still remain.",
"title": ""
},
{
"docid": "28445e19325130be11eae6d21963489e",
"text": "Social media is often viewed as a sensor into various societal events such as disease outbreaks, protests, and elections. We describe the use of social media as a crowdsourced sensor to gain insight into ongoing cyber-attacks. Our approach detects a broad range of cyber-attacks (e.g., distributed denial of service (DDoS) attacks, data breaches, and account hijacking) in a weakly supervised manner using just a small set of seed event triggers and requires no training or labeled samples. A new query expansion strategy based on convolution kernels and dependency parses helps model semantic structure and aids in identifying key event characteristics. Through a large-scale analysis over Twitter, we demonstrate that our approach consistently identifies and encodes events, outperforming existing methods.",
"title": ""
},
{
"docid": "d7538c23aa43edce6cfde8f2125fd3bb",
"text": "We propose a holographic-laser-drawing volumetric display using a computer-generated hologram displayed on a liquid crystal spatial light modulator and multilayer fluorescent screen. The holographic-laser-drawing technique has enabled three things; (i) increasing the number of voxels of the volumetric graphics per unit time; (ii) increasing the total input energy to the volumetric display because the maximum energy incident at a point in the multilayer fluorescent screen is limited by the damage threshold; (iii) controlling the size, shape and spatial position of voxels. In this paper, we demonstrated (i) and (ii). The multilayer fluorescent screen was newly developed to display colored voxels. The thin layer construction of the multilayer fluorescent screen minimized the axial length of the voxels. A two-color volumetric display with blue-green voxels and red voxels were demonstrated.",
"title": ""
}
] |
scidocsrr
|
a7c17eaa960b048e176856545ff58fd7
|
SERM: A Recurrent Model for Next Location Prediction in Semantic Trajectories
|
[
{
"docid": "1527c70d0b78a3d2aa6886282425c744",
"text": "Spatial and temporal contextual information plays a key role for analyzing user behaviors, and is helpful for predicting where he or she will go next. With the growing ability of collecting information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. Recurrent Neural Networks (RNN) model shows promising performance comparing with PFMC and TF, but all these methods have problem in modeling continuous time interval and geographical distance. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., Global Terrorism Database (GTD) and Gowalla dataset.",
"title": ""
},
{
"docid": "153721e9da56e400558f9ec6d4011aac",
"text": "Periodicity is a frequently happening phenomenon for moving objects. Finding periodic behaviors is essential to understanding object movements. However, periodic behaviors could be complicated, involving multiple interleaving periods, partial time span, and spatiotemporal noises and outliers.\n In this paper, we address the problem of mining periodic behaviors for moving objects. It involves two sub-problems: how to detect the periods in complex movement, and how to mine periodic movement behaviors. Our main assumption is that the observed movement is generated from multiple interleaved periodic behaviors associated with certain reference locations. Based on this assumption, we propose a two-stage algorithm, Periodica, to solve the problem. At the first stage, the notion of observation spot is proposed to capture the reference locations. Through observation spots, multiple periods in the movement can be retrieved using a method that combines Fourier transform and autocorrelation. At the second stage, a probabilistic model is proposed to characterize the periodic behaviors. For a specific period, periodic behaviors are statistically generalized from partial movement sequences through hierarchical clustering. Empirical studies on both synthetic and real data sets demonstrate the effectiveness of our method.",
"title": ""
}
] |
[
{
"docid": "af2dbc8d3a04fb3059263b8c367ac856",
"text": "The area of sentiment mining (also called sentiment extraction, opinion mining, opinion extraction, sentiment analysis, etc.) has seen a large increase in academic interest in the last few years. Researchers in the areas of natural language processing, data mining, machine learning, and others have tested a variety of methods of automating the sentiment analysis process. In this research work, new hybrid classification method is proposed based on coupling classification methods using arcing classifier and their performances are analyzed in terms of accuracy. A Classifier ensemble was designed using Naïve Bayes (NB), Support Vector Machine (SVM) and Genetic Algorithm (GA). In the proposed work, a comparative study of the effectiveness of ensemble technique is made for sentiment classification. The feasibility and the benefits of the proposed approaches are demonstrated by means of restaurant review that is widely used in the field of sentiment classification. A wide range of comparative experiments are conducted and finally, some in-depth discussion is presented and conclusions are drawn about the effectiveness of ensemble technique for sentiment classification. Keywords— Accuracy, Arcing classifier, Genetic Algorithm (GA). Naïve Bayes (NB), Sentiment Mining, Support Vector Machine (SVM)",
"title": ""
},
{
"docid": "ff3392832942da723a6a5184669a06a8",
"text": "The past few years has seen the rapid growth of data mining approaches for the analysis of data obtained from Massive Open Online Courses (MOOCs). The objectives of this study are to develop approaches to predict the scores a student may achieve on a given grade-related assessment based on information, considered as prior performance or prior activity in the course. We develop a personalized linear multiple regression (PLMR) model to predict the grade for a student, prior to attempting the assessment activity. The developed model is real-time and tracks the participation of a student within a MOOC (via click-stream server logs) and predicts the performance of a student on the next assessment within the course offering. We perform a comprehensive set of experiments on data obtained from two openEdX MOOCs via a Stanford University initiative. Our experimental results show the promise of the proposed approach in comparison to baseline approaches and also helps in identification of key features that are associated with the study habits and learning behaviors of students.",
"title": ""
},
{
"docid": "6ef985d656f605d40705a582483d562e",
"text": "A rising issue in the scientific community entails the identification of patterns in the evolution of the scientific enterprise and the emergence of trends that influence scholarly impact. In this direction, this paper investigates the mechanism with which citation accumulation occurs over time and how this affects the overall impact of scientific output. Utilizing data regarding the SOFSEM Conference (International Conference on Current Trends in Theory and Practice of Computer Science), we study a corpus of 1006 publications with their associated authors and affiliations to uncover the effects of collaboration on the conference output. We proceed to group publications into clusters based on the trajectories they follow in their citation acquisition. Representative patterns are identified to characterize dominant trends of the conference, while exploring phenomena of early and late recognition by the scientific community and their correlation with impact.",
"title": ""
},
{
"docid": "0939a703cb2eeb9396c4e681f95e1e4d",
"text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.",
"title": ""
},
{
"docid": "d836e5c3ef7742b6dfb47c46672fa251",
"text": "Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.",
"title": ""
},
{
"docid": "432e7ae2e76d76dbb42d92cd9103e3d2",
"text": "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.",
"title": ""
},
{
"docid": "866f7fa780b24fe420623573482df984",
"text": "We present the prenatal ultrasound findings of massive macroglossia in a fetus with prenatally diagnosed Beckwith-Wiedemann syndrome. Three-dimensional surface mode ultrasound was utilized for enhanced visualization of the macroglossia.",
"title": ""
},
{
"docid": "f45b7caf3c599a6de835330c39599570",
"text": "Describes an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. The authors' method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. The authors evaluate their method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that the authors' method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, they also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over the authors' method. They are making all their images and hand labelings publicly available for interested researchers to use in evaluating related methods.",
"title": ""
},
{
"docid": "84ece888e2302d13775973f552c6b810",
"text": "We present a qualitative study of hospitality exchange processes that take place via the online peer-to-peer platform Airbnb. We explore 1) what motivates individuals to monetize network hospitality and 2) how the presence of money ties in with the social interaction related to network hospitality. We approach the topic from the perspective of hosts -- that is, Airbnb users who participate by offering accommodation for other members in exchange for monetary compensation. We found that participants were motivated to monetize network hospitality for both financial and social reasons. Our analysis indicates that the presence of money can provide a helpful frame for network hospitality, supporting hosts in their efforts to accomplish desired sociability, select guests consistent with their preferences, and control the volume and type of demand. We conclude the paper with a critical discussion of the implications of our findings for network hospitality and, more broadly, for the so-called sharing economy.",
"title": ""
},
{
"docid": "97a9f11cf142c251364da09a264026ab",
"text": "We consider techniques for permuting a sparse matrix so that the diagonal of the permuted matrix has entries of large absolute value. We discuss various criteria for this and consider their implementation as computer codes. We then indicate several cases where such a permutation can be useful. These include the solution of sparse equations by a direct method and by an iterative technique. We also consider its use in generating a preconditioner for an iterative method. We see that the effect of these reorderings can be dramatic although the best a priori strategy is by no means clear.",
"title": ""
},
{
"docid": "1ab59137961e9a9f3a347d5331ce7be1",
"text": "Peer-to-peer networks have been quite thoroughly measured over the past years, however it is interesting to note that the BitTorrent Mainline DHT has received very little attention even though it is by far the largest of currently active overlay systems, as our results show. As Mainline DHT differs from other systems, existing measurement methodologies are not appropriate for studying it. In this paper we present an efficient methodology for estimating the number of active users in the network. We have identified an omission in previous methodologies used to measure the size of the network and our methodology corrects this. Our method is based on modeling crawling inaccuracies as a Bernoulli process. It guarantees a very accurate estimation and is able to provide the estimate in about 5 seconds. Through experiments in controlled situations, we demonstrate the accuracy of our method and show the causes of the inaccuracies in previous work, by reproducing the incorrect results. Besides accurate network size estimates, our methodology can be used to detect network anomalies, in particular Sybil attacks in the network. We also report on the results from our measurements which have been going on for almost 2.5 years and are the first long-term study of Mainline DHT.",
"title": ""
},
{
"docid": "6f2dfe7dad77b55635ce279bd4c2acdd",
"text": "Designing of biologically active scaffolds with optimal characteristics is one of the key factors for successful tissue engineering. Recently, hydrogels have received a considerable interest as leading candidates for engineered tissue scaffolds due to their unique compositional and structural similarities to the natural extracellular matrix, in addition to their desirable framework for cellular proliferation and survival. More recently, the ability to control the shape, porosity, surface morphology, and size of hydrogel scaffolds has created new opportunities to overcome various challenges in tissue engineering such as vascularization, tissue architecture and simultaneous seeding of multiple cells. This review provides an overview of the different types of hydrogels, the approaches that can be used to fabricate hydrogel matrices with specific features and the recent applications of hydrogels in tissue engineering. Special attention was given to the various design considerations for an efficient hydrogel scaffold in tissue engineering. Also, the challenges associated with the use of hydrogel scaffolds were described.",
"title": ""
},
{
"docid": "e373e44d5d4445ca56a45b4800b93740",
"text": "In recent years a great deal of research efforts in ship hydromechanics have been devoted to practical navigation problems in moving larger ships safely into existing harbours and inland waterways and to ease congestion in existing shipping routes. The starting point of any navigational or design analysis lies in the accurate determination of the hydrodynamic forces generated on the ship hull moving in confined waters. The analysis of such ship motion should include the effects of shallow water. An area of particular interest is the determination of ship resistance in shallow or restricted waters at different speeds, forming the basis for the power calculation and design of the propulsion system. The present work describes the implementation of CFD techniques for determining the shallow water resistance of a river-sea ship at different speeds. The ship hull flow is analysed for different ship speeds in shallow water conditions. The results obtained from CFD analysis are compared with available standard results.",
"title": ""
},
{
"docid": "447b689d9c7c2a6b71baf2fac2fa2a4f",
"text": "Status of this Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract Various routing protocols, including Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (ISIS), explicitly allow \"Equal-Cost Multipath\" (ECMP) routing. Some router implementations also allow equal-cost multipath usage with RIP and other routing protocols. The effect of multipath routing on a forwarder is that the forwarder potentially has several next-hops for any given destination and must use some method to choose which next-hop should be used for a given data packet.",
"title": ""
},
{
"docid": "1c60ddeb7e940992094cb8f3913e811a",
"text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet",
"title": ""
},
{
"docid": "9f1acbd886cdf792fcaeafad9bfdfed3",
"text": "In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and are in need of their technical support. In this process, the victims are asked to provide scammers with remote access to their machines, who will then “diagnose the problem”, before offering their support services which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web. In this paper, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved, experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcing agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.",
"title": ""
},
{
"docid": "c6d1ad31d52ed40d2fdba3c5840cbb63",
"text": "Classification is one of the most active research and application areas of neural networks. The literature is vast and growing. This paper summarizes the some of the most important developments in neural network classification research. Specifically, the issues of posterior probability estimation, the link between neural and conventional classifiers, learning and generalization tradeoff in classification, the feature variable selection, as well as the effect of misclassification costs are examined. Our purpose is to provide a synthesis of the published research in this area and stimulate further research interests and efforts in the identified topics.",
"title": ""
},
{
"docid": "f3c0479308b50a66646a99f55d19b310",
"text": "In the course of the More Electric Aircraft program frequently active three-phase rectifiers in the power range of several kilowatts are required. It is shown that the three-phase -switch rectifier (comprising three -connected bidirectional switches) is well suited for this application. The system is analyzed using space vector calculus and a novel PWM current controller modulation concept is presented, where all three phases are controlled simultaneously; the analysis shows that the proposed concept yields optimal switching sequences. Analytical relationships for calculating the power components average and rms current ratings are derived to facilitate the rectifier design. A laboratory prototype with an output power of 5 kW is built and measurements taken from this prototype confirm the operation of the proposed current controller. Finally, initial EMI-measurements of the system are also presented.",
"title": ""
},
{
"docid": "bbfdc30b412df84861e242d4305ca20d",
"text": "OBJECTIVES\nLocal anesthetic injection into the interspace between the popliteal artery and the posterior capsule of the knee (IPACK) has the potential to provide motor-sparing analgesia to the posterior knee after total knee arthroplasty. The primary objective of this cadaveric study was to evaluate injectate spread to relevant anatomic structures with IPACK injection.\n\n\nMETHODS\nAfter receipt of Institutional Review Board Biospecimen Subcommittee approval, IPACK injection was performed on fresh-frozen cadavers. The popliteal fossa in each specimen was dissected and examined for injectate spread.\n\n\nRESULTS\nTen fresh-frozen cadaver knees were included in the study. Injectate was observed to spread in the popliteal fossa at a mean ± SD of 6.1 ± 0.7 cm in the medial-lateral dimension and 10.1 ± 3.2 cm in the proximal-distal dimension. No injectate was noted to be in contact with the proximal segment of the sciatic nerve, but 3 specimens showed injectate spread to the tibial nerve. In 3 specimens, the injectate showed possible contact with the common peroneal nerve. The middle genicular artery was consistently surrounded by injectate.\n\n\nCONCLUSIONS\nThis cadaver study of IPACK injection demonstrated spread throughout the popliteal fossa without proximal sciatic involvement. However, the potential for injectate to spread to the tibial or common peroneal nerve was demonstrated. Consistent surrounding of the middle genicular artery with injectate suggests a potential mechanism of analgesia for the IPACK block, due to the predictable relationship between articular sensory nerves and this artery. Further study is needed to determine the ideal site of IPACK injection.",
"title": ""
},
{
"docid": "daa7773486701deab7b0c69e1205a1d9",
"text": "Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern cross these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method. Here, bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of the proposed BDL-PAP over other state-of-the-arts in term of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.",
"title": ""
}
] |
scidocsrr
|
1aa1736b8bed1b6c5a1f950ddd3b2365
|
Towards consistent visual-inertial navigation
|
[
{
"docid": "cc63fa999bed5abf05a465ae7313c053",
"text": "In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcrafts frequently operate in hover or nearhover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with that of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments.",
"title": ""
}
] |
[
{
"docid": "7aec5d9476ed1bd9452a348f5e2a9147",
"text": "Emerging nonvolatile memory (NVM) technologies, such as resistive random access memories (RRAM) and phase-change memories (PCM), are an attractive option for future memory architectures due to their nonvolatility, high density, and low-power operation. Notwithstanding these advantages, they are prone to high defect densities due to the nondeterministic nature of the nanoscale fabrication. We examine the fault models and propose an efficient testing technique to test crossbar-based NVMs. The typical approach to testing memories entails testing one memory element at a time. This is time consuming and does not scale for the dense, RRAM or PCM-based memories. We propose a testing scheme based on “sneak-path sensing” to efficiently detect faults in the memory. The testing scheme uses sneak paths inherent in crossbar memories, to test multiple memory elements at the same time, thereby reducing testing time. We designed the design-for-test support necessary to control the number of sneak paths that are concurrently enabled; this helps control the power consumed during test. The proposed scheme enables and leverages sneak paths during test mode, while still maintaining a sneak path free crossbar during normal operation.",
"title": ""
},
{
"docid": "203ae6dee1000e83dbce325c14539365",
"text": "In this paper, the usefulness of several topologies of DC-DC converters for measuring the characteristic curves of photovoltaic (PV) modules is theoretically analyzed. Eight topologies of DC-DC converters with step-down/step-up conversion relation (buck-boost single inductor, CSC (canonical switching cell), Cuk, SEPIC (single-ended primary inductance converter), zeta, flyback, boost-buck-cascaded, and buck-boost-cascaded converters) are compared and evaluated. This application is based on the property of these converters for emulating a resistor when operating in continuous conduction mode. Therefore, they are suitable to implement a system capable of measuring the I-V curve of PV modules. Other properties have been taken into account: input ripple, devices stress, size of magnetic components and input-output isolation. The study determines that SEPIC and Cuk converters are the most suitable for this application mainly due to the low input current ripple, allow input-output insulation and can be connected in parallel in order to measure PV modules o arrays with greater power. CSC topology is also suitable because it uses fewer components but of a larger size. Experimental results validate the comparative analysis.",
"title": ""
},
{
"docid": "8acd410ff0757423d09928093e7e8f63",
"text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .",
"title": ""
},
{
"docid": "9dac90ed6c1a89fc1f12d7ba581d4889",
"text": "BACKGROUND\nAccurate measurement of core temperature is a standard component of perioperative and intensive care patient management. However, core temperature measurements are difficult to obtain in awake patients. A new non-invasive thermometer has been developed, combining two sensors separated by a known thermal resistance ('double-sensor' thermometer). We thus evaluated the accuracy of the double-sensor thermometer compared with a distal oesophageal thermometer to determine if the double-sensor thermometer is a suitable substitute.\n\n\nMETHODS\nIn perioperative and intensive care patient populations (n=68 total), double-sensor measurements were compared with measurements from a distal oesophageal thermometer using Bland-Altman analysis and Lin's concordance correlation coefficient (CCC).\n\n\nRESULTS\nOverall, 1287 measurement pairs were obtained at 5 min intervals. Ninety-eight per cent of all double-sensor values were within +/-0.5 degrees C of oesophageal temperature. The mean bias between the methods was -0.08 degrees C; the limits of agreement were -0.66 degrees C to 0.50 degrees C. Sensitivity and specificity for detection of fever were 0.86 and 0.97, respectively. Sensitivity and specificity for detection of hypothermia were 0.77 and 0.93, respectively. Lin's CCC was 0.93.\n\n\nCONCLUSIONS\nThe new double-sensor thermometer is sufficiently accurate to be considered an alternative to distal oesophageal core temperature measurement, and may be particularly useful in patients undergoing regional anaesthesia.",
"title": ""
},
{
"docid": "ff0d818dfd07033fb5eef453ba933914",
"text": "Hyperplastic placentas have been reported in several experimental mouse models, including animals produced by somatic cell nuclear transfer, by inter(sub)species hybridization, and by somatic cytoplasm introduction to oocytes followed by intracytoplasmic sperm injection. Of great interest are the gross and histological features common to these placental phenotypes--despite their quite different etiologies--such as the enlargement of the spongiotrophoblast layers. To find morphological clues to the pathways leading to these similar placental phenotypes, we analyzed the ultrastructure of the three different types of hyperplastic placenta. Most cells affected were of trophoblast origin and their subcellular ultrastructural lesions were common to the three groups, e.g., a heavy accumulation of cytoplasmic vacuoles in the trophoblastic cells composing the labyrinthine wall and an increased volume of spongiotrophoblastic cells with extraordinarily dilatated rough endoplasmic reticulum. Although the numbers of trophoblastic glycogen cells were greatly increased, they maintained their normal ultrastructural morphology, including a heavy glycogen deposition throughout the cytoplasm. The fetal endothelium and small vessels were nearly intact. Our ultrastructural study suggests that these three types of placental hyperplasias, with different etiologies, may have common pathological pathways, which probably exclusively affect the development of certain cell types of the trophoblastic lineage during mouse placentation.",
"title": ""
},
{
"docid": "b8dbd71ff09f2e07a523532a65f690c7",
"text": "OBJECTIVE\nTo assess whether adolescent obesity is associated with risk for development of major depressive disorder (MDD) or anxiety disorder. Obesity has been linked to psychosocial difficulties among youth.\n\n\nMETHODS\nAnalysis of a prospective community-based cohort originally from upstate New York, assessed four times over 20 years. Participants (n = 776) were 9 to 18 years old in 1983; subsequent assessments took place in 1985 to 1986 (n = 775), 1991 to 1994 (n = 776), and 2001 to 2003 (n = 661). Using Cox proportional hazards analysis, we evaluated the association of adolescent (age range, 12-17.99 years) weight status with risk for subsequent MDD or anxiety disorder (assessed at each wave by structured diagnostic interviews) in males and females. A total of 701 participants were not missing data on adolescent weight status and had > or = 1 subsequent assessments. MDD and anxiety disorder analyses included 674 and 559 participants (free of current or previous MDD or anxiety disorder), respectively. Adolescent obesity was defined as body mass index above the age- and gender-specific 95th percentile of the Centers for Disease Control and Prevention growth reference.\n\n\nRESULTS\nAdolescent obesity in females predicted an increased risk for subsequent MDD (adjusted hazard ratio (HR) = 3.9; 95% confidence interval (CI) = 1.3, 11.8) and for anxiety disorder (HR = 3.8; CI = 1.3, 11.3). Adolescent obesity in males was not statistically significantly associated with risk for MDD (HR = 1.5; CI = 0.5, 3.5) or anxiety disorder (HR = 0.7; CI = 0.2, 2.9).\n\n\nCONCLUSION\nFemales obese as adolescents may be at increased risk for development of depression or anxiety disorders.",
"title": ""
},
{
"docid": "5100ef5ffa501eb7193510179039cd82",
"text": "The interplay between caching and HTTP Adaptive Streaming (HAS) is known to be intricate, and possibly detrimental to QoE. In this paper, we make the case for caching-aware rate decision algorithms at the client side which do not require any collaboration with cache or server. To this goal, we introduce the optimization model which allows to compute the optimal rate decisions in the presence of cache, and compare the current main representatives of HAS algorithms (RBA and BBA) to this optimal. This allows us to assess how far from the optimal these versions are, and on which to build a caching-aware rate decision algorithm.",
"title": ""
},
{
"docid": "33b0347afbf3c15d713c0c8b1ffab1ca",
"text": "Modern models of event extraction for tasks like ACE are based on supervised learning of events from small hand-labeled data. However, hand-labeled training data is expensive to produce, in low coverage of event types, and limited in size, which makes supervised methods hard to extract large scale of events for knowledge base population. To solve the data labeling problem, we propose to automatically label training data for event extraction via world knowledge and linguistic knowledge, which can detect key arguments and trigger words for each event type and employ them to label events in texts automatically. The experimental results show that the quality of our large scale automatically labeled data is competitive with elaborately human-labeled data. And our automatically labeled data can incorporate with human-labeled data, then improve the performance of models learned from these data.",
"title": ""
},
{
"docid": "b68680f47f1d9b45e30262ab45f0027b",
"text": "Brain-computer interface (BCI) systems create a novel communication channel from the brain to an output device by bypassing conventional motor output pathways of nerves and muscles. Therefore they could provide a new communication and control option for paralyzed patients. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. Here we present a novel technique that allows the simultaneous optimization of a spatial and a spectral filter enhancing discriminability rates of multichannel EEG single-trials. The evaluation of 60 experiments involving 22 different subjects demonstrates the significant superiority of the proposed algorithm over to its classical counterpart: the median classification error rate was decreased by 11%. Apart from the enhanced classification, the spatial and/or the spectral filter that are determined by the algorithm can also be used for further analysis of the data, e.g., for source localization of the respective brain rhythms",
"title": ""
},
{
"docid": "74ef26e332b12329d8d83f80169de5c0",
"text": "It has been claimed that the discovery of association rules is well-suited for applications of market basket analysis to reveal regularities in the purchase behaviour of customers. Moreover, recent work indicates that the discovery of interesting rules can in fact only be addressed within a microeconomic framework. This study integrates the discovery of frequent itemsets with a (microeconomic) model for product selection (PROFSET). The model enables the integration of both quantitative and qualitative (domain knowledge) criteria. Sales transaction data from a fullyautomated convenience store is used to demonstrate the effectiveness of the model against a heuristic for product selection based on product-specific profitability. We show that with the use of frequent itemsets we are able to identify the cross-sales potential of product items and use this information for better product selection. Furthermore, we demonstrate that the impact of product assortment decisions on overall assortment profitability can easily be evaluated by means of sensitivity analysis.",
"title": ""
},
{
"docid": "444f26f3c1ae4b574d8007f93fc80d3d",
"text": "User experience (UX) research has expanded our notion of what makes interactive technology good, often putting hedonic aspects of use such as fun, affect, and stimulation at the center. Outside of UX, the hedonic is often contrasted to the eudaimonic, the notion of striving towards one's personal best. It remains unclear, however, what this distinction offers to UX research conceptually and empirically. We investigate a possible role for eudaimonia in UX research by empirically examining 266 reports of positive experiences with technology and analyzing its relation to established UX concepts. Compared to hedonic experiences, eudaimonic experiences were about striving towards and accomplishing personal goals through technology use. They were also characterized by increased need fulfillment, positive affect, meaning, and long-term importance. Taken together, our findings suggest that while hedonic UX is about momentary pleasures directly derived from technology use, eudaimonic UX is about meaning from need fulfilment.",
"title": ""
},
{
"docid": "4f296caa2ee4621a8e0858bfba701a3b",
"text": "This paper considers the problem of assessing visual aesthetic quality with semantic information. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition offers the key to addressing this problem. Based on convolutional neural networks, we propose a general multi-task framework with four different structures. In each structure, aesthetic quality assessment task and semantic recognition task are leveraged, and different features are explored to improve the quality assessment. Moreover, an effective strategy of keeping a balanced effect between the semantic task and aesthetic task is developed to optimize the parameters of our framework. The correlation analysis among the tasks validates the importance of the semantic recognition in aesthetic quality assessment. Extensive experiments verify the effectiveness of the proposed multi-task framework, and further corroborate the",
"title": ""
},
{
"docid": "2253d4fcef5289578595d6c72db3a905",
"text": "Estimation of efficiency of firms in a non-competit ive market characterized by heterogeneous inputs and outputs along with their varying prices is questionable when factor-based technology sets are used in data envelopment analysis (DEA). In thi s scenario, a value-based technology becomes an appropriate reference technology against which e fficiency can be assessed. In this contribution, the value-based models of Tone (2002) are extended in a directional DEA set up to develop new directional costand revenue-based measures of eff iciency, which are then decomposed into their respective directional value-based technical and al loc tive efficiencies. These new directional value-based measures are more general, and include the xisting value-based measures as special cases. These measures satisfy several desirable pro p rties of an ideal efficiency measure. These new measures are advantageous over the existing ones in t rms of 1) their ability to satisfy the most important property of translation invariance; 2) ch oi es over the use of suitable direction vectors in handling negative data; and 3) flexibility in provi ding the decision makers with the option of specifying preferable direction vectors to incorpor ate their preferences. Finally, under the condition of no prior unit price information, a directional v alue-based measure of profit inefficiency is developed for firms whose underlying objectives are p ofit maximization. For an illustrative empirical application, our new measures are applied to a real-life data set of 50 US banks to draw inferences about the production correspondence of b anking industry.",
"title": ""
},
{
"docid": "4ca04d9a84555894f8cf2834ffafd310",
"text": "T he Economist recently reported that infrastructure spending is the largest it is ever been as a share of world GDP. With $22 trillion in projected investments over the next ten years in emerging economies alone, the magazine calls it the “biggest investment boom in history.” The efficiency of infrastructure planning and execution is therefore particularly important at present. Unfortunately, the private sector, the public sector, and private/public sector partnerships have a dismal record of delivering on large infrastructure cost and performance promises. Consider the following typical examples.",
"title": ""
},
{
"docid": "2419e2750787b1ba2f00d1629e3bbdad",
"text": "Resilient transportation systems enable quick evacuation, rescue, distribution of relief supplies, and other activities for reducing the impact of natural disasters and for accelerating the recovery from them. The resilience of a transportation system largely relies on the decisions made during a natural disaster. We developed an agent-based traffic simulator for predicting the results of potential actions taken with respect to the transportation system to quickly make appropriate decisions. For realistic simulation, we govern the behavior of individual drivers of vehicles with foundational principles learned from probe-car data. For example, we used the probe-car data to estimate the personality of individual drivers of vehicles in selecting their routes, taking into account various metrics of routes such as travel time, travel distance, and the number of turns. This behavioral model, which was constructed from actual data, constitutes a special feature of our simulator. We built this simulator using the X10 language, which enables massively parallel execution for simulating traffic in a large metropolitan area. We report the use cases of the simulator in three major cities in the context of disaster recovery and resilient transportation.",
"title": ""
},
{
"docid": "00ac09dab67200f6b9df78a480d6dbd8",
"text": "In this paper, a new three-phase current-fed push-pull DC-DC converter is proposed. This converter uses a high-frequency three-phase transformer that provides galvanic isolation between the power source and the load. The three active switches are connected to the same reference, which simplifies the gate drive circuitry. Reduction of the input current ripple and the output voltage ripple is achieved by means of an inductor and a capacitor, whose volumes are smaller than in equivalent single-phase topologies. The three-phase DC-DC conversion also helps in loss distribution, allowing the use of lower cost switches. These characteristics make this converter suitable for applications where low-voltage power sources are used and the associated currents are high, such as in fuel cells, photovoltaic arrays, and batteries. The theoretical analysis, a simplified design example, and the experimental results for a 1-kW prototype will be presented for two operation regions. The prototype was designed for a switching frequency of 40 kHz, an input voltage of 120 V, and an output voltage of 400 V.",
"title": ""
},
{
"docid": "815098e9ed06dfa5335f0c2c595f4059",
"text": "Effectively managing risk is an essential element of successful project management. It is imperative that project management team consider all possible risks to establish corrective actions in the right time. So far, several techniques have been proposed for project risk analysis. Failure Mode and Effect Analysis (FMEA) is recognized as one of the most useful techniques in this field. The main goal is identifying all failure modes within a system, assessing their impact, and planning for corrective actions. In traditional FMEA, the risk priorities of failure modes are determined by using Risk Priority Numbers (RPN), which can be obtained by multiplying the scores of risk factors like occurrence (O), severity (S), and detection (D). This technique has some limitations, though in this paper, Fuzzy logic and Analytical Hierarchy Process (AHP) are used to address the limitations of traditional FMEA. Linguistic variables, expressed in fuzzy numbers, are used to assess the ratings of risk factors O, S, and D. Each factor consists of seven membership functions and on the whole there are 343 rules for fuzzy system. The analytic hierarchy process (AHP) is applied to determine the relative weightings of risk impacts on time, cost, quality and safety. A case study is presented to validate the concept. The feedbacks are showing the advantages of the proposed approach in project risk management.",
"title": ""
},
{
"docid": "240d47115c8bbf98e15ca4acae13ee62",
"text": "A trusted and active community aided and supported by the Internet of Things (IoT) is a key factor in food waste reduction and management. This paper proposes an IoT based context aware framework which can capture real-time dynamic requirements of both vendors and consumers and perform real-time match-making based on captured data. We describe our proposed reference framework and the notion of smart food sharing containers as enabling technology in our framework. A prototype system demonstrates the feasibility of a proposed approach using a smart container with embedded sensors.",
"title": ""
},
{
"docid": "4b1a02a1921a33a8c2f4d01670174f77",
"text": "In this paper we propose an approach for articulated tracking of multiple people in unconstrained videos. Our starting point is a model that resembles existing architectures for single-frame pose estimation but is several orders of magnitude faster. We achieve this in two ways: (1) by simplifying and sparsifying the body-part relationship graph and leveraging recent methods for faster inference, and (2) by offloading a substantial share of computation onto a feed-forward convolutional architecture that is able to detect and associate body joints of the same person even in clutter. We use this model to generate proposals for body joint locations and formulate articulated tracking as spatio-temporal grouping of such proposals. This allows to jointly solve the association problem for all people in the scene by propagating evidence from strong detections through time and enforcing constraints that each proposal can be assigned to one person only. We report results on a public MPII Human Pose benchmark and on a new dataset of videos with multiple people. We demonstrate that our model achieves state-of-the-art results while using only a fraction of time and is able to leverage temporal information to improve state-of-the-art for crowded scenes1.",
"title": ""
},
{
"docid": "095f8d5c3191d6b70b2647b562887aeb",
"text": "Hardware specialization, in the form of datapath and control circuitry customized to particular algorithms or applications, promises impressive performance and energy advantages compared to traditional architectures. Current research in accelerators relies on RTL-based synthesis flows to produce accurate timing, power, and area estimates. Such techniques not only require significant effort and expertise but also are slow and tedious to use, making large design space exploration infeasible. To overcome this problem, the authors developed Aladdin, a pre-RTL, power-performance accelerator modeling framework and demonstrated its application to system-on-chip (SoC) simulation. Aladdin estimates performance, power, and area of accelerators within 0.9, 4.9, and 6.6 percent with respect to RTL implementations. Integrated with architecture-level general-purpose core and memory hierarchy simulators, Aladdin provides researchers with a fast but accurate way to model the power and performance of accelerators in an SoC environment.",
"title": ""
}
] |
scidocsrr
|
1bb7bc2568ca25431e1c081234350a9d
|
Learning a time-dependent master saliency map from eye-tracking data in videos
|
[
{
"docid": "825b567c1a08d769aa334b707176f607",
"text": "A critical function in both machine vision and biological vision systems is attentional selection of scene regions worthy of further analysis by higher-level processes such as object recognition. Here we present the first model of spatial attention that (1) can be applied to arbitrary static and dynamic image sequences with interactive tasks and (2) combines a general computational implementation of both bottom-up (BU) saliency and dynamic top-down (TD) task relevance; the claimed novelty lies in the combination of these elements and in the fully computational nature of the model. The BU component computes a saliency map from 12 low-level multi-scale visual features. The TD component computes a low-level signature of the entire image, and learns to associate different classes of signatures with the different gaze patterns recorded from human subjects performing a task of interest. We measured the ability of this model to predict the eye movements of people playing contemporary video games. We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a combined BU*TD model performs significantly better than either individual component. Qualitatively, the combined model predicts some easy-to-describe but hard-to-compute aspects of attentional selection, such as shifting attention leftward when approaching a left turn along a racing track. Thus, our study demonstrates the advantages of integrating BU factors derived from a saliency map and TD factors learned from image and task contexts in predicting where humans look while performing complex visually-guided behavior.",
"title": ""
},
{
"docid": "de1165d7ca962c5bbd141d571e50dbd3",
"text": "A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.",
"title": ""
},
{
"docid": "97c5b202cdc1f7d8220bf83663a0668f",
"text": "Despite significant recent progress, the best available visual saliency models still lag behind human performance in predicting eye fixations in free-viewing of natural scenes. Majority of models are based on low-level visual features and the importance of top-down factors has not yet been fully explored or modeled. Here, we combine low-level features such as orientation, color, intensity, saliency maps of previous best bottom-up models with top-down cognitive visual features (e.g., faces, humans, cars, etc.) and learn a direct mapping from those features to eye fixations using Regression, SVM, and AdaBoost classifiers. By extensive experimenting over three benchmark eye-tracking datasets using three popular evaluation scores, we show that our boosting model outperforms 27 state-of-the-art models and is so far the closest model to the accuracy of human model for fixation prediction. Furthermore, our model successfully detects the most salient object in a scene without sophisticated image processings such as region segmentation.",
"title": ""
},
{
"docid": "dd2267e380de2bc5ef71ee7ffd2eb00a",
"text": "We propose a formal Bayesian definition of surprise to capture subjective aspects of sensory information. Surprise measures how data affects an observer, in terms of differences between posterior and prior beliefs about the world. Only data observations which substantially affect the observer's beliefs yield surprise, irrespectively of how rare or informative in Shannon's sense these observations are. We test the framework by quantifying the extent to which humans may orient attention and gaze towards surprising events or items while watching television. To this end, we implement a simple computational model where a low-level, sensory form of surprise is computed by simple simulated early visual neurons. Bayesian surprise is a strong attractor of human attention, with 72% of all gaze shifts directed towards locations more surprising than the average, a figure rising to 84% when focusing the analysis onto regions simultaneously selected by all observers. The proposed theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction.",
"title": ""
}
] |
[
{
"docid": "d16693b6b6f95105321508c114154edc",
"text": "Classification of hyperspectral image (HSI) is an important research topic in the remote sensing community. Significant efforts (e.g., deep learning) have been concentrated on this task. However, it is still an open issue to classify the high-dimensional HSI with a limited number of training samples. In this paper, we propose a semi-supervised HSI classification method inspired by the generative adversarial networks (GANs). Unlike the supervised methods, the proposed HSI classification method is semi-supervised, which can make full use of the limited labeled samples as well as the sufficient unlabeled samples. Core ideas of the proposed method are twofold. First, the three-dimensional bilateral filter (3DBF) is adopted to extract the spectral-spatial features by naturally treating the HSI as a volumetric dataset. The spatial information is integrated into the extracted features by 3DBF, which is propitious to the subsequent classification step. Second, GANs are trained on the spectral-spatial features for semi-supervised learning. A GAN contains two neural networks (i.e., generator and discriminator) trained in opposition to one another. The semi-supervised learning is achieved by adding samples from the generator to the features and increasing the dimension of the classifier output. Experimental results obtained on three benchmark HSI datasets have confirmed the effectiveness of the proposed method, especially with a limited number of labeled samples.",
"title": ""
},
{
"docid": "bd3a2546d9f91f224e76759c087a7a1e",
"text": "In this paper, we present a practical relay attack that can be mounted on RFID systems found in many applications nowadays. The described attack uses a self-designed proxy device to forward the RF communication from a reader to a modern NFC-enabled smart phone (Google Nexus S). The phone acts as a mole to inquire a victim’s card in the vicinity of the system. As a practical demonstration of our attack, we target a widely used accesscontrol application that usually grants access to office buildings using a strong AES authentication feature. Our attack successfully relays this authentication process via a Bluetooth channel (> 50 meters) within several hundred milliseconds. As a result, we were able to impersonate an authorized user and to enter the building without being detected.",
"title": ""
},
{
"docid": "6ba7f7390490da05cca6c4ab4d9d9fab",
"text": "Object detection and localization is a challenging task. Among several approaches, more recently hierarchical methods of feature-based object recognition have been developed and demonstrated high-end performance measures. Inspired by the knowledge about the architecture and function of the primate visual system, the computational HMAX model has been proposed. At the same time robust visual object recognition was proposed using feature distributions, e.g. histograms of oriented gradients (HOGs). Since both models build upon an edge representation of the input image, the question arises, whether one kind of approach might be superior to the other. Introducing a new biologically inspired attention steered processing framework, we demonstrate that the combination of both approaches gains the best results.",
"title": ""
},
{
"docid": "6f7c81d869b4389d5b84e80b4c306381",
"text": "Environmental, genetic, and immune factors are at play in the development of the variable clinical manifestations of Graves' ophthalmopathy (GO). Among the environmental contributions, smoking is the risk factor most consistently linked to the development or worsening of the disease. The close temporal relationship between the diagnoses of Graves' hyperthyroidism and GO have long suggested that these 2 autoimmune conditions may share pathophysiologic features. The finding that the thyrotropin receptor (TSHR) is expressed in orbital fibroblasts, the target cells in GO, supported the notion of a common autoantigen. Both cellular and humeral immunity directed against TSHR expressed on orbital fibroblasts likely initiate the disease process. Activation of helper T cells recognizing TSHR peptides and ligation of TSHR by TRAb lead to the secretion of inflammatory cytokines and chemokines, and enhanced hyaluronic acid (HA) production and adipogenesis. The resulting connective tissue remodeling results in varying degrees extraocular muscle enlargement and orbital fat expansion. A subset of orbital fibroblasts express CD34, are bone-marrow derived, and circulate as fibrocytes that infiltrate connective tissues at sites of injury or inflammation. As these express high levels of TSHR and are capable of producing copious cytokines and chemokines, they may represent an orbital fibroblast population that plays a central role in GO development. In addition to TSHR, orbital fibroblasts from patients with GO express high levels of IGF-1R. Recent studies suggest that these receptors engage in cross-talk induced by TSHR ligation to synergistically enhance TSHR signaling, HA production, and the secretion of inflammatory mediators.",
"title": ""
},
{
"docid": "1e1355e7fbe185c2e69083fe8df2d875",
"text": "The problem of reproducing high dynamic range images on devices with restricted dynamic range has gained a lot of interest in the computer graphics community. There exist various approaches to this issue, which span several research areas including computer graphics, image processing, color vision, physiological aspects, etc. These approaches assume a thorough knowledge of both the objective and subjective attributes of an image. However, no comprehensive overview and analysis of such attributes has been published so far. In this contribution, we present an overview about the effects of basic image attributes in HDR tone mapping. Furthermore, we propose a scheme of relationships between these attributes, leading to the definition of an overall image quality measure. We present results of subjective psychophysical experiments that we have performed to prove the proposed relationship scheme. Moreover, we also present an evaluation of existing tone mapping methods (operators) with regard to these attributes. Finally, the execution of with-reference and without a real reference perceptual experiments gave us the opportunity to relate the obtained subjective results. Our effort is not just useful to get into the tone mapping field or when implementing a tone mapping method, but it also sets the stage for well-founded quality comparisons between tone mapping methods. By providing good definitions of the different attributes, user-driven or fully automatic comparisons are made possible.",
"title": ""
},
{
"docid": "7150d210ad78110897c3b3f5078c935b",
"text": "Resolution in Magnetic Resonance (MR) is limited by diverse physical, technological and economical considerations. In conventional medical practice, resolution enhancement is usually performed with bicubic or B-spline interpolations, strongly affecting the accuracy of subsequent processing steps such as segmentation or registration. This paper presents a sparse-based super-resolution method, adapted for easily including prior knowledge, which couples up high and low frequency information so that a high-resolution version of a low-resolution brain MR image is generated. The proposed approach includes a whole-image multi-scale edge analysis and a dimensionality reduction scheme, which results in a remarkable improvement of the computational speed and accuracy, taking nearly 26 min to generate a complete 3D high-resolution reconstruction. The method was validated by comparing interpolated and reconstructed versions of 29 MR brain volumes with the original images, acquired in a 3T scanner, obtaining a reduction of 70% in the root mean squared error, an increment of 10.3 dB in the peak signal-to-noise ratio, and an agreement of 85% in the binary gray matter segmentations. The proposed method is shown to outperform a recent state-of-the-art algorithm, suggesting a substantial impact in voxel-based morphometry studies.",
"title": ""
},
{
"docid": "d7ac0414b269202015d29ddaaa4bd436",
"text": "Mobile manipulation tasks in shopfloor logistics require robots to grasp objects from various transport containers such as boxes and pallets. In this paper, we present an efficient processing pipeline that detects and localizes boxes and pallets in RGB-D images. Our method is based on edges in both the color image and the depth image and uses a RANSAC approach for reliably localizing the detected containers. Experiments show that the proposed method reliably detects and localizes both container types while guaranteeing low processing times.",
"title": ""
},
{
"docid": "b72f4554f2d7ac6c5a8000d36a099e67",
"text": "Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T1. It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.",
"title": ""
},
{
"docid": "6574a8b000e2f08f8b8da323e992d559",
"text": "The rapid advances of transportation infrastructure have led to a dramatic increase in the demand for smart systems capable of monitoring traffic and street safety. Fundamental to these applications are a community-based evaluation platform and benchmark for object detection and multiobject tracking. To this end, we organize the AVSS2017 Challenge on Advance Traffic Monitoring, in conjunction with the International Workshop on Traffic and Street Surveillance for Safety and Security (IWT4S), to evaluate the state-of-the-art object detection and multi-object tracking algorithms in the relevance of traffic surveillance. Submitted algorithms are evaluated using the large-scale UA-DETRAC benchmark and evaluation protocol. The benchmark, the evaluation toolkit and the algorithm performance are publicly available from the website http: //detrac-db.rit.albany.edu.",
"title": ""
},
{
"docid": "a2bb54cd5df70c68441da823f90bece1",
"text": "This paper describes the development of innovative low-cost home dedicated fire alert detection system (FADS) using ZigBee wireless network. Our home FADS system are consists of an Arduino Uno Microcontroller, Xbee wireless module (Zigbee wireless), Arduino digital temperature sensor, buzzer alarm and X-CTU software. Arduino and wireless ZigBee has advantages in terms of its long battery life and much cheaper compared to the others wireless sensor network. There are several objectives that need to be accomplished in developing this project which are to develop fire alert detection system (FADS) for home user using ZigBee wireless network and to evaluate the effectiveness of the home FADS by testing it in different distances and the functionality of heat sensor. Based from the experiments, the results show that the home FADS could function as expected. It also could detect heat and alarm triggered when temperature is above particular value. Furthermore, this project provides a guideline for implementing and applying home FADS at home and recommendation on future studies for home FADS in monitoring the temperature on the web server.",
"title": ""
},
{
"docid": "7f47a4b5152acf7e38d5c39add680f9d",
"text": "unit of computation and a processor a piece of physical hardware In addition to reading to and writing from local memory a process can send and receive messages by making calls to a library of message passing routines The coordinated exchange of messages has the e ect of synchronizing processes This can be achieved by the synchronous exchange of messages in which the sending operation does not terminate until the receive operation has begun A di erent form of synchronization occurs when a message is sent asynchronously but the receiving process must wait or block until the data arrives Processes can be mapped to physical processors in various ways the mapping employed does not a ect the semantics of a program In particular multiple processes may be mapped to a single processor The message passing model provides a mechanism for talking about locality data contained in the local memory of a process are close and other data are remote We now examine some other properties of the message passing programming model performance mapping independence and modularity",
"title": ""
},
{
"docid": "87ae6c0b8bd90bde0cb4876352e222b4",
"text": "This study examined the developmental trajectories of three frequently postulated executive function (EF) components, Working Memory, Shifting, and Inhibition of responses, and their relation to performance on standard, but complex, neuropsychological EF tasks, the Wisconsin Card Sorting Task (WCST), and the Tower of London (ToL). Participants in four age groups (7-, 11-, 15-, and 21-year olds) carried out nine basic experimental tasks (three tasks for each EF), the WCST, and the ToL. Analyses were done in two steps: (1) analyses of (co)variance to examine developmental trends in individual EF tasks while correcting for basic processing speed, (2) confirmatory factor analysis to extract latent variables from the nine basic EF tasks, and to explain variance in the performance on WCST and ToL, using these latent variables. Analyses of (co)variance revealed a continuation of EF development into adolescence. Confirmatory factor analysis yielded two common factors: Working Memory and Shifting. However, the variables assumed to tap Inhibition proved unrelated. At a latent level, again correcting for basic processing speed, the development of Shifting was seen to continue into adolescence, while Working Memory continued to develop into young-adulthood. Regression analyses revealed that Working Memory contributed most strongly to WCST performance in all age groups. These results suggest that EF component processes develop at different rates, and that it is important to recognize both the unity and diversity of EF component processes in studying the development of EF.",
"title": ""
},
{
"docid": "2802d66dfa1956bf83649614b76d470e",
"text": "Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques can provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. We propose to investigate the problem of example selection and presentation in the context of teaching humans, and explore a variety of mechanisms in the interests of finding what may work best. In particular, we begin with the baseline of random presentation and then examine combinations of several mechanisms: the indication of an example’s relative difficulty, the use of the shaping heuristic from the cognitive science literature (moving from easier examples to harder ones), and a novel kernel-based “coverage model” of the subject’s mastery of the task. From our experiments on 54 human subjects learning and performing a pair of synthetic classification tasks via our teaching system, we found that we can achieve the greatest gains with a combination of shaping and the coverage model.",
"title": ""
},
{
"docid": "de73e8e382dddfba867068f1099b86fb",
"text": "Endophytes are fungi which infect plants without causing symptoms. Fungi belonging to this group are ubiquitous, and plant species not associated to fungal endophytes are not known. In addition, there is a large biological diversity among endophytes, and it is not rare for some plant species to be hosts of more than one hundred different endophytic species. Different mechanisms of transmission, as well as symbiotic lifestyles occur among endophytic species. Latent pathogens seem to represent a relatively small proportion of endophytic assemblages, also composed by latent saprophytes and mutualistic species. Some endophytes are generalists, being able to infect a wide range of hosts, while others are specialists, limited to one or a few hosts. Endophytes are gaining attention as a subject for research and applications in Plant Pathology. This is because in some cases plants associated to endophytes have shown increased resistance to plant pathogens, particularly fungi and nematodes. Several possible mechanisms by which endophytes may interact with pathogens are discussed in this review. Additional key words: biocontrol, biodiversity, symbiosis.",
"title": ""
},
{
"docid": "3c812cad23bffaf36ad485dbd530e040",
"text": "Social tags are user-generated keywords associated with some resource on the Web. In the case of music, social tags have become an important component of “Web2.0” recommender systems, allowing users to generate playlists based on use-dependent terms such as chill or jogging that have been applied to particular songs. In this paper, we propose a method for predicting these social tags directly from MP3 files. Using a set of boosted classifiers, we map audio features onto social tags collected from the Web. The resulting automatic tags (or autotags) furnish information about music that is otherwise untagged or poorly tagged, allowing for insertion of previously unheard music into a social recommender. This avoids the ”cold-start problem” common in such systems. Autotags can also be used to smooth the tag space from which similarities and recommendations are made by providing a set of comparable baseline tags for all tracks in a recommender system.",
"title": ""
},
{
"docid": "1ad06e5eee4d4f29dd2f0e8f0dd62370",
"text": "Recent research on map matching algorithms for land vehicle navigation has been based on either a conventional topological analysis or a probabilistic approach. The input to these algorithms normally comes from the global positioning system and digital map data. Although the performance of some of these algorithms is good in relatively sparse road networks, they are not always reliable for complex roundabouts, merging or diverging sections of motorways and complex urban road networks. In high road density areas where the average distance between roads is less than 100 metres, there may be many road patterns matching the trajectory of the vehicle reported by the positioning system at any given moment. Consequently, it may be difficult to precisely identify the road on which the vehicle is travelling. Therefore, techniques for dealing with qualitative terms such as likeliness are essential for map matching algorithms to identify a correct link. Fuzzy logic is one technique that is an effective way to deal with qualitative terms, linguistic vagueness, and human intervention. This paper develops a map matching algorithm based on fuzzy logic theory. The inputs to the proposed algorithm are from the global positioning system augmented with data from deduced reckoning sensors to provide continuous navigation. The algorithm is tested on different road networks of varying complexity. The validation of this algorithm is carried out using high precision positioning data obtained from GPS carrier phase observables. The performance of the developed map matching algorithm is evaluated against the performance of several well-accepted existing map matching algorithms. The results show that the fuzzy logic-based map matching algorithm provides a significant improvement over existing map matching algorithms both in terms of identifying correct links and estimating the vehicle position on the links.",
"title": ""
},
{
"docid": "cbf32934e275e8d95a584762b270a5c2",
"text": "Online telemedicine systems are useful due to the possibility of timely and efficient healthcare services. These systems are based on advanced wireless and wearable sensor technologies. The rapid growth in technology has remarkably enhanced the scope of remote health monitoring systems. In this paper, a real-time heart monitoring system is developed considering the cost, ease of application, accuracy, and data security. The system is conceptualized to provide an interface between the doctor and the patients for two-way communication. The main purpose of this study is to facilitate the remote cardiac patients in getting latest healthcare services which might not be possible otherwise due to low doctor-to-patient ratio. The developed monitoring system is then evaluated for 40 individuals (aged between 18 and 66 years) using wearable sensors while holding an Android device (i.e., smartphone under supervision of the experts). The performance analysis shows that the proposed system is reliable and helpful due to high speed. The analyses showed that the proposed system is convenient and reliable and ensures data security at low cost. In addition, the developed system is equipped to generate warning messages to the doctor and patient under critical circumstances.",
"title": ""
},
{
"docid": "fc904f979f7b00941852ac9db66f7129",
"text": "The Orchidaceae are one of the most species-rich plant families and their floral diversity and pollination biology have long intrigued evolutionary biologists. About one-third of the estimated 18,500 species are thought to be pollinated by deceit. To date, the focus has been on how such pollination evolved, how the different types of deception work, and how it is maintained, but little progress has been made in understanding its evolutionary consequences. To address this issue, we discuss here how deception affects orchid mating systems, the evolution of reproductive isolation, speciation processes and neutral genetic divergence among species. We argue that pollination by deceit is one of the keys to orchid floral and species diversity. A better understanding of its evolutionary consequences could help evolutionary biologists to unravel the reasons for the evolutionary success of orchids.",
"title": ""
},
{
"docid": "89fff85bba64d7411948c2a09345093a",
"text": "Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature.",
"title": ""
},
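The random-forest passage in the record above describes cross-validated classification accuracy and variable-importance ranking. The following Python sketch illustrates that workflow; the synthetic dataset and all parameter values are assumptions made for illustration, since the ecological data themselves are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a species-presence dataset (illustrative only).
X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("5-fold CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

# Variable importance, the ranking the passage highlights as a strength of RF.
rf.fit(X, y)
print("Feature importances:", rf.feature_importances_)
```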
{
"docid": "327450c9470de1254ecc209afcd8addb",
"text": "Intra-individual performance variability may be an important index of the efficiency with which executive control processes are implemented, Lesion studies suggest that damage to the frontal lobes is accompanied by an increase in such variability. Here we sought for the first time to investigate how the functional neuroanatomy of executive control is modulated by performance variability in healthy subjects by using an event-related functional magnetic resonance imaging (ER-fMRI) design and a Go/No-go response inhibition paradigm. Behavioural results revealed that individual differences in Go response time variability were a strong predictor of inhibitory success and that differences in mean Go response time could not account for this effect. Task-related brain activation was positively correlated with intra-individual variability within a distributed inhibitory network consisting of bilateral middle frontal areas and right inferior parietal and thalamic regions. Both the behavioural and fMRI data are consistent with the interpretation that those subjects with relatively higher intra-individual variability activate inhibitory regions to a greater extent, perhaps reflecting a greater requirement for top-down executive control in this group, a finding that may be relevant to disorders of executive/attentional control.",
"title": ""
}
] |
scidocsrr
|
888c8c24d4760426b1cad758776d0c47
|
Learning an Invariant Hilbert Space for Domain Adaptation
|
[
{
"docid": "cb2dd47932aa4709e2497fdb16b5e5f2",
"text": "In this paper, we propose an approach to the domain adaptation, dubbed Second-or Higher-order Transfer of Knowledge (So-HoT), based on the mixture of alignments of second-or higher-order scatter statistics between the source and target domains. The human ability to learn from few labeled samples is a recurring motivation in the literature for domain adaptation. Towards this end, we investigate the supervised target scenario for which few labeled target training samples per category exist. Specifically, we utilize two CNN streams: the source and target networks fused at the classifier level. Features from the fully connected layers fc7 of each network are used to compute second-or even higher-order scatter tensors, one per network stream per class. As the source and target distributions are somewhat different despite being related, we align the scatters of the two network streams of the same class (within-class scatters) to a desired degree with our bespoke loss while maintaining good separation of the between-class scatters. We train the entire network in end-to-end fashion. We provide evaluations on the standard Office benchmark (visual domains) and RGB-D combined with Caltech256 (depth-to-rgb transfer). We attain state-of-the-art results.",
"title": ""
}
] |
[
{
"docid": "d7310e830f85541aa1d4b94606c1be0c",
"text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.",
"title": ""
},
{
"docid": "d7a143bdb62e4aaeaf18b0aabe35588e",
"text": "BACKGROUND\nShort-acting insulin analogue use for people with diabetes is still controversial, as reflected in many scientific debates.\n\n\nOBJECTIVES\nTo assess the effects of short-acting insulin analogues versus regular human insulin in adults with type 1 diabetes.\n\n\nSEARCH METHODS\nWe carried out the electronic searches through Ovid simultaneously searching the following databases: Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R) (1946 to 14 April 2015), EMBASE (1988 to 2015, week 15), the Cochrane Central Register of Controlled Trials (CENTRAL; March 2015), ClinicalTrials.gov and the European (EU) Clinical Trials register (both March 2015).\n\n\nSELECTION CRITERIA\nWe included all randomised controlled trials with an intervention duration of at least 24 weeks that compared short-acting insulin analogues with regular human insulins in the treatment of adults with type 1 diabetes who were not pregnant.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted data and assessed trials for risk of bias, and resolved differences by consensus. We graded overall study quality using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) instrument. We used random-effects models for the main analyses and presented the results as odds ratios (OR) with 95% confidence intervals (CI) for dichotomous outcomes.\n\n\nMAIN RESULTS\nWe identified nine trials that fulfilled the inclusion criteria including 2693 participants. The duration of interventions ranged from 24 to 52 weeks with a mean of about 37 weeks. The participants showed some diversity, mainly with regard to diabetes duration and inclusion/exclusion criteria. The majority of the trials were carried out in the 1990s and participants were recruited from Europe, North America, Africa and Asia. None of the trials was carried out in a blinded manner so that the risk of performance bias, especially for subjective outcomes such as hypoglycaemia, was present in all of the trials. Furthermore, several trials showed inconsistencies in the reporting of methods and results.The mean difference (MD) in glycosylated haemoglobin A1c (HbA1c) was -0.15% (95% CI -0.2% to -0.1%; P value < 0.00001; 2608 participants; 9 trials; low quality evidence) in favour of insulin analogues. The comparison of the risk of severe hypoglycaemia between the two treatment groups showed an OR of 0.89 (95% CI 0.71 to 1.12; P value = 0.31; 2459 participants; 7 trials; very low quality evidence). For overall hypoglycaemia, also taking into account mild forms of hypoglycaemia, the data were generally of low quality, but also did not indicate substantial group differences. Regarding nocturnal severe hypoglycaemic episodes, two trials reported statistically significant effects in favour of the insulin analogue, insulin aspart. However, due to inconsistent reporting in publications and trial reports, the validity of the result remains questionable.We also found no clear evidence for a substantial effect of insulin analogues on health-related quality of life. However, there were few results only based on subgroups of the trial populations. None of the trials reported substantial effects regarding weight gain or any other adverse events. 
No trial was designed to investigate possible long-term effects (such as all-cause mortality, diabetic complications), in particular in people with diabetes related complications.\n\n\nAUTHORS' CONCLUSIONS\nOur analysis suggests only a minor benefit of short-acting insulin analogues on blood glucose control in people with type 1 diabetes. To make conclusions about the effect of short acting insulin analogues on long-term patient-relevant outcomes, long-term efficacy and safety data are needed.",
"title": ""
},
{
"docid": "49fed572de904ac3bb9aab9cdc874cc6",
"text": "Factorized Hidden Layer (FHL) adaptation has been proposed for speaker adaptation of deep neural network (DNN) based acoustic models. In FHL adaptation, a speaker-dependent (SD) transformation matrix and an SD bias are included in addition to the standard affine transformation. The SD transformation is a linear combination of rank-1 matrices whereas the SD bias is a linear combination of vectors. Recently, the Long ShortTerm Memory (LSTM) Recurrent Neural Networks (RNNs) have shown to outperform DNN acoustic models in many Automatic Speech Recognition (ASR) tasks. In this work, we investigate the effectiveness of SD transformations for LSTM-RNN acoustic models. Experimental results show that when combined with scaling of LSTM cell states’ outputs, SD transformations achieve 2.3% and 2.1% absolute improvements over the baseline LSTM systems for the AMI IHM and AMI SDM tasks respectively.",
"title": ""
},
{
"docid": "72ce1e7b2f5f4b7131e121630e86a5c7",
"text": "Schizophrenia is a chronic and severe mental illness that poses significant challenges. While many pharmacological and psychosocial interventions are available, many treatment-resistant schizophrenia patients continue to suffer from persistent psychotic symptoms, notably auditory verbal hallucinations (AVH), which are highly disabling. This unmet clinical need requires new innovative treatment options. Recently, a psychological therapy using computerized technology has shown large therapeutic effects on AVH severity by enabling patients to engage in a dialogue with a computerized representation of their voices. These very promising results have been extended by our team using immersive virtual reality (VR). Our study was a 7-week phase-II, randomized, partial cross-over trial. Nineteen schizophrenia patients with refractory AVH were recruited and randomly allocated to either VR-assisted therapy (VRT) or treatment-as-usual (TAU). The group allocated to TAU consisted of antipsychotic treatment and usual meetings with clinicians. The TAU group then received a delayed 7weeks of VRT. A follow-up was ensured 3months after the last VRT therapy session. Changes in psychiatric symptoms, before and after TAU or VRT, were assessed using a linear mixed-effects model. Our findings showed that VRT produced significant improvements in AVH severity, depressive symptoms and quality of life that lasted at the 3-month follow-up period. Consistent with previous research, our results suggest that VRT might be efficacious in reducing AVH related distress. The therapeutic effects of VRT on the distress associated with the voices were particularly prominent (d=1.2). VRT is a highly novel and promising intervention for refractory AVH in schizophrenia.",
"title": ""
},
{
"docid": "b18f98cfad913ebf3ce1780b666277cb",
"text": "Deep convolutional neural network (DCNN) has achieved remarkable performance on object detection and speech recognition in recent years. However, the excellent performance of a DCNN incurs high computational complexity and large memory requirement In this paper, an equal distance nonuniform quantization (ENQ) scheme and a K-means clustering nonuniform quantization (KNQ) scheme are proposed to reduce the required memory storage when low complexity hardware or software implementations are considered. For the VGG-16 and the AlexNet, the proposed nonuniform quantization schemes reduce the number of required memory storage by approximately 50% while achieving almost the same or even better classification accuracy compared to the state-of-the-art quantization method. Compared to the ENQ scheme, the proposed KNQ scheme provides a better tradeoff when higher accuracy is required.",
"title": ""
},
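The quantization passage in the record above mentions a K-means clustering nonuniform quantization (KNQ) scheme. The Python sketch below shows one plausible reading of that idea, clustering a layer's weights and replacing each weight by its cluster centroid; the function name, the 16-level codebook, and the use of scikit-learn are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(weights, n_levels=16):
    """Replace each weight with the nearest of n_levels centroids found by k-means."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=n_levels, n_init=10, random_state=0).fit(flat)
    codebook = km.cluster_centers_.ravel()   # the n_levels representative values
    codes = km.predict(flat)                 # index of the assigned centroid per weight
    return codebook[codes].reshape(weights.shape), codebook

w = np.random.randn(64, 64).astype(np.float32)
w_q, codebook = kmeans_quantize(w, n_levels=16)
print("Distinct values after quantization:", np.unique(w_q).size)
```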
{
"docid": "2176518448c89ba977d849f71c86e6a6",
"text": "iii I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. _______________________________________ L. Peter Deutsch I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. Abstract Object-oriented programming languages confer many benefits, including abstraction, which lets the programmer hide the details of an object's implementation from the object's clients. Unfortunately, crossing abstraction boundaries often incurs a substantial run-time overhead in the form of frequent procedure calls. Thus, pervasive use of abstraction , while desirable from a design standpoint, may be impractical when it leads to inefficient programs. Aggressive compiler optimizations can reduce the overhead of abstraction. However, the long compilation times introduced by optimizing compilers delay the programming environment's responses to changes in the program. Furthermore, optimization also conflicts with source-level debugging. Thus, programmers are caught on the horns of two dilemmas: they have to choose between abstraction and efficiency, and between responsive programming environments and efficiency. This dissertation shows how to reconcile these seemingly contradictory goals by performing optimizations lazily. Four new techniques work together to achieve high performance and high responsiveness: • Type feedback achieves high performance by allowing the compiler to inline message sends based on information extracted from the runtime system. On average, programs run 1.5 times faster than the previous SELF system; compared to a commercial Smalltalk implementation, two medium-sized benchmarks run about three times faster. This level of performance is obtained with a compiler that is both simpler and faster than previous SELF compilers. • Adaptive optimization achieves high responsiveness without sacrificing performance by using a fast non-optimizing compiler to generate initial code while automatically recompiling heavily used parts of the program with an optimizing compiler. On a previous-generation workstation like the SPARCstation-2, fewer than 200 pauses exceeded 200 ms during a 50-minute interaction, and 21 pauses exceeded one second. …",
"title": ""
},
{
"docid": "95045efce8527a68485915d8f9e2c6cf",
"text": "OBJECTIVES\nTo update the normal stretched penile length values for children younger than 5 years of age. We also evaluated the association between penile length and anthropometric measures such as body weight, height, and body mass index.\n\n\nMETHODS\nThe study was performed as a cross-section study. The stretched penile lengths of 1040 white uncircumcised male infants and children 0 to 5 years of age were measured, and the mean length for each age group and the rate of increase in penile length were calculated. The correlation between penile length and weight, height, and body mass index of the children was determined by Pearson analysis.\n\n\nRESULTS\nThe stretched penile length was 3.65 +/- 0.27 cm in full-term newborns (n = 165) and 3.95 +/- 0.35 cm in children 1 to 3 months old (n = 112), 4.26 +/- 0.40 cm in those 3.1 to 6 months old (n = 130), 4.65 +/- 0.47 cm in those 6.1 to 12 months old (n = 148), 4.82 +/- 0.44 cm in those 12.1 to 24 months old (n = 135), 5.15 +/- 0.46 cm in those 24.1 to 36 months old (n = 120), 5.58 +/- 0.47 cm in those 36.1 to 48 months old (n = 117), and 6.02 +/- 0.50 cm in those 48.1 to 60 months old (n = 113). The fastest rate of increase in penile length was seen in the first 6 months of age, with a value of 1 mm/mo. A significant correlation was found between penile length and the weight, height, and body mass index of the boys (r = 0.881, r = 0.864, and r = 0.173, respectively; P = 0.001).\n\n\nCONCLUSIONS\nThe age-related values of penile length must be known to be able to determine abnormal penile sizes and to monitor treatment of underlying diseases. Our study has provided updated reference values for penile lengths for Turkish and other white boys aged 0 to 5 years.",
"title": ""
},
{
"docid": "58164220c13b39eb5d2ca48139d45401",
"text": "There is general agreement that structural similarity — a match in relational structure — is crucial in analogical processing. However, theories differ in their definitions of structural similarity: in particular, in whether there must be conceptual similarity between the relations in the two domains or whether parallel graph structure is sufficient. In two studies, we demonstrate, first, that people draw analogical correspondences based on matches in conceptual relations, rather than on purely structural graph matches; and, second, that people draw analogical inferences between passages that have matching conceptual relations, but not between passages with purely structural graph matches.",
"title": ""
},
{
"docid": "a979b0a02f2ade809c825b256b3c69d8",
"text": "The objective of this review is to analyze in detail the microscopic structure and relations among muscular fibers, endomysium, perimysium, epimysium and deep fasciae. In particular, the multilayer organization and the collagen fiber orientation of these elements are reported. The endomysium, perimysium, epimysium and deep fasciae have not just a role of containment, limiting the expansion of the muscle with the disposition in concentric layers of the collagen tissue, but are fundamental elements for the transmission of muscular force, each one with a specific role. From this review it appears that the muscular fibers should not be studied as isolated elements, but as a complex inseparable from their fibrous components. The force expressed by a muscle depends not only on its anatomical structure, but also the angle at which its fibers are attached to the intramuscular connective tissue and the relation with the epimysium and deep fasciae.",
"title": ""
},
{
"docid": "af4d150e993258124ba0af211fa26841",
"text": "..................................................................................................................................................................... 3 RESUMÉ ........................................................................................................................................................................... 4 PREFACE ......................................................................................................................................................................... 5 TABLE OF CONTENTS ................................................................................................................................................. 6 INTRODUCTION ............................................................................................................................................................ 7 PROBLEM ANALYSIS................................................................................................................................................... 8 EXISTING METHODS ........................................................................................................................................................ 9 THE VIOLA-JONES FACE DETECTOR .................................................................................................................. 10 INTRODUCTION TO CHAPTER......................................................................................................................................... 10 METHODS ..................................................................................................................................................................... 10 The scale invariant detector .................................................................................................................................... 10 The modified AdaBoost algorithm........................................................................................................................... 12 The cascaded classifier ........................................................................................................................................... 13 IMPLEMENTATION & RESULTS...................................................................................................................................... 15 Generating positive examples ................................................................................................................................. 15 Generating negative examples ................................................................................................................................ 18 Training a stage in the cascade............................................................................................................................... 20 Training the cascade ............................................................................................................................................... 21 The final face detector............................................................................................................................................. 24 A simple comparison ............................................................................................................................................... 27 Discussion ............................................................................................................................................................... 
29 FUTURE WORK ............................................................................................................................................................. 31 CONCLUSION ............................................................................................................................................................... 32 APPENDIX 1 LITERATURE LIST AND REFERENCES...................................................................................... 33 APPENDIX 2 CONTENTS OF THE ENCLOSED DVD ......................................................................................... 34 APPENDIX 3 IMAGE 2, 3 AND 4 AFTER DETECTION....................................................................................... 35",
"title": ""
},
{
"docid": "58061318f47a2b96367fe3e8f3cd1fce",
"text": "The growth of lymphatic vessels (lymphangiogenesis) is actively involved in a number of pathological processes including tissue inflammation and tumor dissemination but is insufficient in patients suffering from lymphedema, a debilitating condition characterized by chronic tissue edema and impaired immunity. The recent explosion of knowledge on the molecular mechanisms governing lymphangiogenesis provides new possibilities to treat these diseases.",
"title": ""
},
{
"docid": "aa98236ba9b9468b4780a3c8be27b62c",
"text": "The final goal of Interpretable Semantic Textual Similarity (iSTS) is to build systems that explain which are the differences and commonalities between two sentences. The task adds an explanatory level on top of STS, formalized as an alignment between the chunks in the two input sentences, indicating the relation and similarity score of each alignment. The task provides train and test data on three datasets: news headlines, image captions and student answers. It attracted nine teams, totaling 20 runs. All datasets and the annotation guideline are freely available1",
"title": ""
},
{
"docid": "4723129771fb19967d6e55c5e2bcf3e1",
"text": "The semantic interpretation of images can benefit from representations of useful concepts and the links between them as ontologies. In this paper, we propose an ontology of spatial relations, in order to guide image interpretation and the recognition of the structures it contains using structural information on the spatial arrangement of these structures. As an original theoretical contribution, this ontology is then enriched by fuzzy representations of concepts, which define their semantics, and allow establishing the link between these concepts (which are often expressed in linguistic terms) and the information that can be extracted from images. This contributes to reducing the semantic gap and it constitutes a new methodological approach to guide semantic image interpretation. This methodological approach is illustrated on a medical example, dealing with knowledge-based recognition of brain structures in 3D magnetic resonance images using the proposed fuzzy spatial relation ontology. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "18c56e9d096ba4ea48a0579626f83edc",
"text": "PURPOSE\nThe purpose of this study was to provide an overview of platelet-rich plasma (PRP) injected into the scalp for the management of androgenic alopecia.\n\n\nMATERIALS AND METHODS\nA literature review was performed to evaluate the benefits of PRP in androgenic alopecia.\n\n\nRESULTS\nHair restoration has been increasing. PRP's main components of platelet-derived growth factor, transforming growth factor, and vascular endothelial growth factor have the potential to stimulate hard and soft tissue wound healing. In general, PRP showed a benefit on patients with androgenic alopecia, including increased hair density and quality. Currently, different PRP preparations are being used with no standard technique.\n\n\nCONCLUSION\nThis review found beneficial effects of PRP on androgenic alopecia. However, more rigorous study designs, including larger samples, quantitative measurements of effect, and longer follow-up periods, are needed to solidify the utility of PRP for treating patients with androgenic alopecia.",
"title": ""
},
{
"docid": "64e2b73e8a2d12a1f0bbd7d07fccba72",
"text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an allaround evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.",
"title": ""
},
{
"docid": "c06bfd970592c62f952fa98289f9e3b9",
"text": "This paper proposes a new inequality-based criterion/constraint with its algorithmic and computational details for obstacle avoidance of redundant robot manipulators. By incorporating such a dynamically updated inequality constraint and the joint physical constraints (such as joint-angle limits and joint-velocity limits), a novel minimum-velocity-norm (MVN) scheme is presented and investigated for robotic redundancy resolution. The resultant obstacle-avoidance MVN scheme resolved at the joint-velocity level is further reformulated as a general quadratic program (QP). Two QP solvers, i.e., a simplified primal-dual neural network based on linear variational inequalities (LVI) and an LVI-based numerical algorithm, are developed and applied for online solution of the QP problem as well as the inequality-based obstacle-avoidance MVN scheme. Simulative results that are based on PA10 robot manipulator and a six-link planar robot manipulator in the presence of window-shaped and point obstacles demonstrate the efficacy and superiority of the proposed obstacle-avoidance MVN scheme. Moreover, experimental results of the proposed MVN scheme implemented on the practical six-link planar robot manipulator substantiate the physical realizability and effectiveness of such a scheme for obstacle avoidance of redundant robot manipulator.",
"title": ""
},
{
"docid": "d9daeb451c69b7eeab8ef00a8ea6af05",
"text": "This paper describes the effectiveness of knowledge distillation using teacher student training for building accurate and compact neural networks. We show that with knowledge distillation, information from multiple acoustic models like very deep VGG networks and Long Short-Term Memory (LSTM) models can be used to train standard convolutional neural network (CNN) acoustic models for a variety of systems requiring a quick turnaround. We examine two strategies to leverage multiple teacher labels for training student models. In the first technique, the weights of the student model are updated by switching teacher labels at the minibatch level. In the second method, student models are trained on multiple streams of information from various teacher distributions via data augmentation. We show that standard CNN acoustic models can achieve comparable recognition accuracy with much smaller number of model parameters compared to teacher VGG and LSTM acoustic models. Additionally we also investigate the effectiveness of using broadband teacher labels as privileged knowledge for training better narrowband acoustic models within this framework. We show the benefit of this simple technique by training narrowband student models with broadband teacher soft labels on the Aurora 4 task.",
"title": ""
},
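The distillation passage in the record above trains student acoustic models on softened teacher outputs. A common formulation of such a soft-label loss is sketched below in PyTorch; the temperature, the mixing weight, the class count, and the function name are illustrative assumptions rather than the exact recipe used in the paper.

```python
import torch
import torch.nn.functional as F

def soft_label_distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a KL term against the teacher's softened distribution with the usual cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                              # rescale so the soft term matches the hard loss in magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random tensors standing in for acoustic-model outputs.
s = torch.randn(8, 42)            # student logits: batch of 8, 42 classes (illustrative)
t = torch.randn(8, 42)            # teacher logits
y = torch.randint(0, 42, (8,))    # hard labels
print(soft_label_distillation_loss(s, t, y))
```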
{
"docid": "2579cb11b9d451d6017ebb642d6a35cb",
"text": "The presence of bots has been felt in many aspects of social media. Twitter, one example of social media, has especially felt the impact, with bots accounting for a large portion of its users. These bots have been used for malicious tasks such as spreading false information about political candidates and inflating the perceived popularity of celebrities. Furthermore, these bots can change the results of common analyses performed on social media. It is important that researchers and practitioners have tools in their arsenal to remove them. Approaches exist to remove bots, however they focus on precision to evaluate their model at the cost of recall. This means that while these approaches are almost always correct in the bots they delete, they ultimately delete very few, thus many bots remain. We propose a model which increases the recall in detecting bots, allowing a researcher to delete more bots. We evaluate our model on two real-world social media datasets and show that our detection algorithm removes more bots from a dataset than current approaches.",
"title": ""
},
{
"docid": "95ff1a86eedad42b0d869cca0d7d6e33",
"text": "360° videos give viewers a spherical view and immersive experience of surroundings. However, one challenge of watching 360° videos is continuously focusing and re-focusing intended targets. To address this challenge, we developed two Focus Assistance techniques: Auto Pilot (directly bringing viewers to the target), and Visual Guidance (indicating the direction of the target). We conducted an experiment to measure viewers' video-watching experience and discomfort using these techniques and obtained their qualitative feedback. We showed that: 1) Focus Assistance improved ease of focus. 2) Focus Assistance techniques have specificity to video content. 3) Participants' preference of and experience with Focus Assistance depended not only on individual difference but also on their goal of watching the video. 4) Factors such as view-moving-distance, salience of the intended target and guidance, and language comprehension affected participants' video-watching experience. Based on these findings, we provide design implications for better 360° video focus assistance.",
"title": ""
},
{
"docid": "0885f805c8a5226642c28904b5df6818",
"text": "Blind people need some aid to feel safe while moving. Smart stick comes as a proposed solution to improve the mobility of both blind and visually impaired people. Stick solution use different technologies like ultrasonic, infrared and laser but they still have drawbacks. In this paper we propose, light weight, cheap, user friendly, fast response and low power consumption, smart stick based on infrared technology. A pair of infrared sensors can detect stair-cases and other obstacles presence in the user path, within a range of two meters. The experimental results achieve good accuracy and the stick is able to detect all of obstacles.",
"title": ""
}
] |
scidocsrr
|
48bca357490b39bf6df44ebe16bb7579
|
RETracer: Triaging Crashes by Reverse Execution from Partial Memory Dumps
|
[
{
"docid": "09aa131819a67f8569ca4dba27ce207d",
"text": "A widely shared belief in the software engineering community is that stack traces are much sought after by developers to support them in debugging. But limited empirical evidence is available to confirm the value of stack traces to developers. In this paper, we seek to provide such evidence by conducting an empirical study on the usage of stack traces by developers from the ECLIPSE project. Our results provide strong evidence to this effect and also throws light on some of the patterns in bug fixing using stack traces. We expect the findings of our study to further emphasize the importance of adding stack traces to bug reports and that in the future, software vendors will provide more support in their products to help general users make such information available when filing bug reports.",
"title": ""
}
] |
[
{
"docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b",
"text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.",
"title": ""
},
{
"docid": "5e7a87078f92b7ce145e24a2e7340f1b",
"text": "Unsupervised artificial neural networks are now considered as a likely alternative to classical computing models in many application domains. For example, recent neural models defined by neuro-scientists exhibit interesting properties for an execution in embedded and autonomous systems: distributed computing, unsupervised learning, self-adaptation, self-organisation, tolerance. But these properties only emerge from large scale and fully connected neural maps that result in intensive computation coupled with high synaptic communications. We are interested in deploying these powerful models in the embedded context of an autonomous bio-inspired robot learning its environment in realtime. So we study in this paper in what extent these complex models can be simplified and deployed in hardware accelerators compatible with an embedded integration. Thus we propose a Neural Processing Unit designed as a programmable accelerator implementing recent equations close to self-organizing maps and neural fields. The proposed architecture is validated on FPGA devices and compared to state of the art solutions. The trade-off proposed by this dedicated but programmable neural processing unit allows to achieve significant improvements and makes our architecture adapted to many embedded systems.",
"title": ""
},
{
"docid": "e022d5b292d391e201d15e8b2317bc30",
"text": "This article describes the most prominent approaches to apply artificial intelligence technologies to information retrieval (IR). Information retrieval is a key technology for knowledge management. It deals with the search for information and the representation, storage and organization of knowledge. Information retrieval is concerned with search processes in which a user needs to identify a subset of information which is relevant for his information need within a large amount of knowledge. The information seeker formulates a query trying to describe his information need. The query is compared to document representations which were extracted during an indexing phase. The representations of documents and queries are typically matched by a similarity function such as the Cosine. The most similar documents are presented to the users who can evaluate the relevance with respect to their problem (Belkin, 2000). The problem to properly represent documents and to match imprecise representations has soon led to the application of techniques developed within Artificial Intelligence to information retrieval.",
"title": ""
},
{
"docid": "21511302800cd18d21dbc410bec3cbb2",
"text": "We investigate theoretical and practical aspects of the design of far-field RF power extraction systems consisting of antennas, impedance matching networks and rectifiers. Fundamental physical relationships that link the operating bandwidth and range are related to technology dependent quantities like threshold voltage and parasitic capacitances. This allows us to design efficient planar antennas, coupled resonator impedance matching networks and low-power rectifiers in standard CMOS technologies (0.5-mum and 0.18-mum) and accurately predict their performance. Experimental results from a prototype power extraction system that operates around 950 MHz and integrates these components together are presented. Our measured RF power-up threshold (in 0.18-mum, at 1 muW load) was 6 muWplusmn10%, closely matching the predicted value of 5.2 muW.",
"title": ""
},
{
"docid": "f7121b434ae326469780f300256367a8",
"text": "Aerial Manipulators (AMs) are a special class of underactuated mechanical systems formed by the join of Unmanned Aerial Vehicles (UAVs) and manipulators. A thorough analysis of the dynamics and a fully constructive controller design for a quadrotor plus n-link manipulator in a free-motion on an arbitrary plane is provided, via the lDA-PBC methodology. A controller is designed with the manipulator locked at any position ensuring global asymptotic stability in an open set and avoiding the AM goes upside down (autonomous). The major result of stability/robustness arises when it is proved that, additionally, the controller guarantees the boundedness of the trajectories for bounded movements of the manipulator, i.e. the robot manipulator executing planned tasks, giving rise to a non-autonomous port-controlled Hamiltonian system in closed loop. Moreover, all trajectories converge to a positive limit set, a strong result for matching-type controllers.",
"title": ""
},
{
"docid": "c72940e6154fa31f6bedca17336f8a94",
"text": "Following on from ecological theories of perception, such as the one proposed by [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin] this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts; and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations as well as the visual and auditory cues, that we perceive when tasting food.",
"title": ""
},
{
"docid": "99a874fd9545649f517eb2a949a9b934",
"text": "Sensor miniaturisation, improved battery technology and the availability of low-cost yet advanced Unmanned Aerial Vehicles (UAV) have provided new opportunities for environmental remote sensing. The UAV provides a platform for close-range aerial photography. Detailed imagery captured from micro-UAV can produce dense point clouds using multi-view stereopsis (MVS) techniques combining photogrammetry and computer vision. This study applies MVS techniques to imagery acquired from a multi-rotor micro-UAV of a natural coastal site in southeastern Tasmania, Australia. A very dense point cloud (<1–3 cm point spacing) is produced in an arbitrary coordinate system using full resolution imagery, whereas other studies usually downsample the original imagery. The point cloud is sparse in areas of complex vegetation and where surfaces have a homogeneous texture. Ground control points collected with Differential Global Positioning System (DGPS) are identified and used for georeferencing via a Helmert transformation. This study compared georeferenced point clouds to a Total Station survey in order to assess and quantify their geometric accuracy. The results indicate that a georeferenced point cloud accurate to 25–40 mm can be obtained from imagery acquired from ∼50 m. UAV-based image capture provides the spatial and temporal resolution required to map and monitor natural landscapes. This paper assesses the accuracy of the generated point clouds based on field survey points. Based on our key findings we conclude that sub-decimetre terrain change (in this case coastal erosion) can be monitored. Remote Sens. 2012, 4 1574",
"title": ""
},
{
"docid": "d159ddace8c8d33963a304e04484aeff",
"text": "This work addresses the problem of semantic scene understanding under fog. Although marked progress has been made in semantic scene understanding, it is mainly concentrated on clear-weather scenes. Extending semantic segmentation methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both labeled synthetic foggy data and unlabeled real foggy data. The method is based on the fact that the results of semantic segmentation in moderately adverse conditions (light fog) can be bootstrapped to solve the same problem in highly adverse conditions (dense fog). CMAda is extensible to other adverse conditions and provides a new paradigm for learning with synthetic data and unlabeled real data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) a novel fog densification method to densify the fog in real foggy scenes without known depth; and 4) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 40 images under dense fog. Our experiments show that 1) our fog simulation and fog density estimator outperform their state-of-theart counterparts with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly, benefiting both from our synthetic and real foggy data. The datasets and code are available at the project website. D. Dai · C. Sakaridis · S. Hecker · L. Van Gool ETH Zürich, Zurich, Switzerland L. Van Gool KU Leuven, Leuven, Belgium",
"title": ""
},
{
"docid": "9027d974a3bb5c48c1d8f3103e6035d6",
"text": "The creation of memories about real-life episodes requires rapid neuronal changes that may appear after a single occurrence of an event. How is such demand met by neurons in the medial temporal lobe (MTL), which plays a fundamental role in episodic memory formation? We recorded the activity of MTL neurons in neurosurgical patients while they learned new associations. Pairs of unrelated pictures, one of a person and another of a place, were used to construct a meaningful association modeling the episodic memory of meeting a person in a particular place. We found that a large proportion of responsive MTL neurons expanded their selectivity to encode these specific associations within a few trials: cells initially responsive to one picture started firing to the associated one but not to others. Our results provide a plausible neural substrate for the inception of associations, which are crucial for the formation of episodic memories.",
"title": ""
},
{
"docid": "c0db65ce1428099d5bfb00071d820096",
"text": "With the rise of soft robotics technology and applications, there have been increasing interests in the development of controllers appropriate for their particular design. Being fundamentally different from traditional rigid robots, there is still not a unified framework for the design, analysis, and control of these high-dimensional robots. This review article attempts to provide an insight into various controllers developed for continuum/soft robots as a guideline for future applications in the soft robotics field. A comprehensive assessment of various control strategies and an insight into the future areas of research in this field are presented.",
"title": ""
},
{
"docid": "700b1a3fd913d2980f87def5540938f1",
"text": "Foursquare is an online social network and can be represented with a bipartite network of users and venues. A user-venue pair is connected if a user has checked-in at that venue. In the case of Foursquare, network analysis techniques can be used to enhance the user experience. One such technique is link prediction, which can be used to build a personalized recommendation system of venues. Recommendation systems in bipartite networks are very often designed using the global ranking method and collaborative filtering. A less known methodnetwork based inference is also a feasible choice for link prediction in bipartite networks and sometimes performs better than the previous two. In this paper we test these techniques on the Foursquare network. The best technique proves to be the network based inference. We also show that taking into account the available metadata can be beneficial.",
"title": ""
},
{
"docid": "ea0b94e3ad27603d45f56de039c39388",
"text": "Recent work on generative text modeling has found that variational autoencoders (VAE) with LSTM decoders perform worse than simpler LSTM language models (Bowman et al., 2015). This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. In this paper, we experiment with a new type of decoder for VAE: a dilated CNN. By changing the decoder’s dilation architecture, we control the size of context from previously generated words. In experiments, we find that there is a trade-off between contextual capacity of the decoder and effective use of encoding information. We show that when carefully managed, VAEs can outperform LSTM language models. We demonstrate perplexity gains on two datasets, representing the first positive language modeling result with VAE. Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines.",
"title": ""
},
{
"docid": "4cc4a6644e367afacee006fdb9f5e68a",
"text": "A lifetime optimization methodology for planning the inspection and repair of structures that deteriorate over time is introduced and illustrated through numerical examples. The optimization is based on minimizing the expected total life-cycle cost while maintaining an allowable lifetime reliability for the structure. This method incorporates: (a) the quality of inspection techniques with different detection capabilities; (b) all repair possibilities based on an event tree; (c) the effects of aging, deterior~ti~m: an~ subsequent. r~~air on structural reliability; and (d) the time value of money. The overall cost to be minimized Includes the initial cost and the costs of preventive maintenance, inspection, repair, and failure. The methodology is illustrated using the reinforced concrete T-girders from a highway bridge. An optimum inspection/repair strategy is deve~~ped for these girders that are deteriorating due to corrosion in an aggressive environment. The effect of cntlcal. pa rameters such as rate of corrosion, quality of the inspection technique, and the expected cost of structural fallure are all investigated, along with the effects of both uniform and nonuniform inspection time intervals. Ultimately, the reliability-based lifetime approach to developing an optimum inspection/repair strategy demonstrates the potential for cost savings and improved efficiency. INTRODUCTION The management of the nation's infrastructure is a vitally important function of government. The inspection and repair of the transportation network is needed for uninterrupted com merce and a functioning economy. With about 600,000 high way bridges in the national inventory, the maintenance of these structures alone represents a commitment of billions of dollars annually. In fact, the nation spends at least $5,000,000,000 per year for highway bridge design, construction, replacement, and rehabilitation (Status 1993). Given this huge investment along with an increasing scarcity of resources, it is essential that the funds be used as efficiently as possible. Highway bridges deteriorate over time and need mainte nance/inspection programs that detect damage, deterioration, loss of effective strength in members, missing fasteners, frac tures, and cracks. Bridge serviceability is highly dependent on the frequency and quality of these maintenance programs. Be cause the welfare of many people depends on the health of the highway system, it is important that these bridges be main tained and inspected routinely. An efficient bridge maintenance program requires careful planning base~ on potenti~ modes of failure of the structural elements, the history of major struc tural repairs done to the bridge, and, of course, the frequency and intensity of the applied loads. Effective maintenal1;ce/in spection can extend the life expectancy of a system while re ducing the possibility of costly failures in the future. In any bridge, there are many defects that may appear dur ing a projected service period, such as potholes in the deck, scour on the piers, or the deterioration of joints or bearings. Corrosion of steel reinforcement, initiated by high chloride concentrations in the concrete, is a serious cause of degrada tion in concrete structures (Ting 1989). The corrosion damage is revealed by the initiation and propagation of cracks, which can be detected and repaired by scheduled maintenance and inspection procedures. As a result, the reliability of corrosive 'Prof., Dept. of Civ., Envir., and Arch. Engrg., Univ. 
of Colorado, Boulder, CO 80309-0428. 'Proj. Mgr., Chung-Shen Inst. of Sci. and Techno\\., Taiwan, Republic of China; formerly, Grad. Student, Dept. of Civ., Envir., and Arch. Engrg., Univ. of Colorado, Boulder, CO. 'Grad. Siudent, Dept. of Civ., Envir., and Arch. Engrg., Univ. of Col orado. Boulder, CO. critical structures depends not only on the structural design, but also on the inspection and repair procedures. This paper proposes a method to optimize the lifetime inspection/repair strategy of corrosion-critical concrete struc tures based on the reliability of the structure and cost-effec tiveness. The method is applicable for any type of damage whose evolution can be modeled over time. The reliability based analysis of structures, with or without maintenance/in spection procedures, is attracting the increased attention of re searchers (Thoft-Christensen and Sr6rensen 1987; Mori and Ellingwood 1994a). The optimal lifetime inspection/repair strategy is obtained by minimizing the expected total life-cycle cost while satisfying the constraints on the allowable level of structural lifetime reliability in service. The expected total life cycle cost includes the initial cost and the costs of preventive maintenance, inspection, repair, and failure. MAINTENANCEIINSPECTION For many bridges, both preventive and repair maintenance are typically performed. Preventive or routine maintenance in cludes replacing small parts, patching concrete, repairing cracks, changing lubricants, and cleaning and painting expo~ed parts. The structure is kept in working condition by delaymg and mitigating the aging effects of wear, fatigue, and related phenomena. In contrast, repair maintenance m~gh~ inclu~e re placing a bearing, resurfacing a deck, or modlfymg. a girder. Repair maintenance tends to be less frequent, reqUlres more effort, is usually more costly, and results in a measurable in crease in reliability. A sample maintenance strategy is shown in Fig. 1, where T l , T2 , T3 , and T4 represent the times of repair maintenance, and effort is a generic quantity that reflects cost, amount of work performed, and benefit derived from the main tenance. While guidance for routine maintenance exists, many repair maintenance strategies are based on experience and local ~rac tice rather than on sound theoretical investigations. Mamte-",
"title": ""
},
{
"docid": "b0382aa0f8c8171b78dba1c179554450",
"text": "This paper is concerned with the hard thresholding operator which sets all but the k largest absolute elements of a vector to zero. We establish a tight bound to quantitatively characterize the deviation of the thresholded solution from a given signal. Our theoretical result is universal in the sense that it holds for all choices of parameters, and the underlying analysis depends only on fundamental arguments in mathematical optimization. We discuss the implications for two domains: Compressed Sensing. On account of the crucial estimate, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter for a vast volume of hard thresholding based algorithms, which renders an improvement on the RIP condition especially when the true sparsity is unknown. This suggests that in essence, many more kinds of sensing matrices or fewer measurements are admissible for the data acquisition procedure. Machine Learning. In terms of large-scale machine learning, a significant yet challenging problem is learning accurate sparse models in an efficient manner. In stark contrast to prior work that attempted the `1-relaxation for promoting sparsity, we present a novel stochastic algorithm which performs hard thresholding in each iteration, hence ensuring such parsimonious solutions. Equipped with the developed bound, we prove the global linear convergence for a number of prevalent statistical models under mild assumptions, even though the problem turns out to be non-convex.",
"title": ""
},
{
"docid": "a7c37a5ee66fb2db6288a6314bdea78f",
"text": "Radial, space-filling visualizations can be useful for depi cting information hierarchies, but they suffer from one major problem. As the hierarchy grows in size, ma ny items become small, peripheral slices that are difficult to distinguish. We have developed t hree visualization/interaction techniques that provide flexible browsing of the display. The technique s allow viewers to examine the small items in detail while providing context within the entire in formation hierarchy. Additionally, smooth transitions between views help users maintain orientation within the complete information space.",
"title": ""
},
{
"docid": "10512cddabf509100205cb241f2f206a",
"text": "Due to an increasing growth of Internet usage, cybercrimes has been increasing at an Alarming rate and has become most profitable criminal activity. Botnet is an emerging threat to the cyber security and existence of Command and Control Server(C&C Server) makes it very dangerous attack as compare to all other malware attacks. Botnet is a network of compromised machines which are remotely controlled by bot master to do various malicious activities with the help of command and control server and n-number of slave machines called bots. The main motive behind botnet is Identity theft, Denial of Service attack, Click fraud, Phishing and many other malware activities. Botnets rely on different protocols such as IRC, HTTP and P2P for transmission. Different botnet detection techniques have been proposed in recent years. This paper discusses Botnet, Botnet history, and life cycle of Botnet apart from classifying various Botnet detection techniques. Paper highlights the recent research work under botnets in cyber realm and proposes directions for future research in this area.",
"title": ""
},
{
"docid": "83ac82ef100fdf648a5214a50d163fe3",
"text": "We consider the problem of multi-robot taskallocation when robots have to deal with uncertain utility estimates. Typically an allocation is performed to maximize expected utility; we consider a means for measuring the robustness of a given optimal allocation when robots have some measure of the uncertainty (e.g., a probability distribution, or moments of such distributions). We introduce a new O(n) algorithm, the Interval Hungarian algorithm, that extends the classic KuhnMunkres Hungarian algorithm to compute the maximum interval of deviation (for each entry in the assignment matrix) which will retain the same optimal assignment. This provides an efficient measurement of the tolerance of the allocation to the uncertainties, for both a specific interval and a set of interrelated intervals. We conduct experiments both in simulation and with physical robots to validate the approach and to gain insight into the effect of location uncertainty on allocations for multi-robot multi-target navigation tasks.",
"title": ""
},
{
"docid": "cc05dca89bf1e3f53cf7995e547ac238",
"text": "Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic image labelling. Our paper has several contributions: We show how random forests can be augmented with structured label information and be used to deliver structured low-level predictions. The learning task is carried out by employing a novel split function evaluation criterion that exploits the joint distribution observed in the structured label space. This allows the forest to learn typical label transitions between object classes and avoid locally implausible label configurations. We provide two approaches for integrating the structured output predictions obtained at a local level from the forest into a concise, global, semantic labelling. We integrate our new ideas also in the Hough-forest framework with the view of exploiting contextual information at the classification level to improve the performance on the task of object detection. Finally, we provide experimental evidence for the effectiveness of our approach on different tasks: Semantic image labelling on the challenging MSRCv2 and CamVid databases, reconstruction of occluded handwritten Chinese characters on the Kaist database and pedestrian detection on the TU Darmstadt databases.",
"title": ""
},
{
"docid": "fb89fd2d9bf526b8bc7f1433274859a6",
"text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide ffective controlto the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to aslive wireandlive lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes",
"title": ""
},
{
"docid": "c973dc425e0af0f5253b71ae4ebd40f9",
"text": "A growing body of research on Bitcoin and other permissionless cryptocurrencies that utilize Nakamoto’s blockchain has shown that they do not easily scale to process a high throughput of transactions, or to quickly approve individual transactions; blocks must be kept small, and their creation rates must be kept low in order to allow nodes to reach consensus securely. As of today, Bitcoin processes a mere 3-7 transactions per second, and transaction confirmation takes at least several minutes. We present SPECTRE, a new protocol for the consensus core of crypto-currencies that remains secure even under high throughput and fast confirmation times. At any throughput, SPECTRE is resilient to attackers with up to 50% of the computational power (up until the limit defined by network congestion and bandwidth constraints). SPECTRE can operate at high block creation rates, which implies that its transactions confirm in mere seconds (limited mostly by the round-trip-time in the network). Key to SPECTRE’s achievements is the fact that it satisfies weaker properties than classic consensus requires. In the conventional paradigm, the order between any two transactions must be decided and agreed upon by all non-corrupt nodes. In contrast, SPECTRE only satisfies this with respect to transactions performed by honest users. We observe that in the context of money, two conflicting payments that are published concurrently could only have been created by a dishonest user, hence we can afford to delay the acceptance of such transactions without harming the usability of the system. Our framework formalizes this weaker set of requirements for a crypto-currency’s distributed ledger. We then provide a formal proof that SPECTRE satisfies these requirements.",
"title": ""
}
] |
scidocsrr
|
f733c12163b7ad9cafd560d8fe668e72
|
Extraction of Salient Sentences from Labelled Documents
|
[
{
"docid": "6eeeb343309fc24326ed42b62d5524b1",
"text": "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model’s ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.",
"title": ""
},
{
"docid": "64330f538b3d8914cbfe37565ab0d648",
"text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
{
"docid": "7b908fa217f75f75254ccbb433818416",
"text": "This paper presents a new approach to perform the estimation of the translation model probabilities of a phrase-based statistical machine translation system. We use neural networks to directly learn the translation probability of phrase pairs using continuous representations. The system can be easily trained on the same data used to build standard phrase-based systems. We provide experimental evidence that the approach seems to be able to infer meaningful translation probabilities for phrase pairs not seen in the training data, or even predict a list of the most likely translations given a source phrase. The approach can be used to rescore n-best lists, but we also discuss an integration into the Moses decoder. A preliminary evaluation on the English/French IWSLT task achieved improvements in the BLEU score and a human analysis showed that the new model often chooses semantically better translations. Several extensions of this work are discussed.",
"title": ""
},
{
"docid": "55b9284f9997b18d3b1fad9952cd4caa",
"text": "This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few handcrafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark of the literature.",
"title": ""
}
] |
[
{
"docid": "480fe848464a80774e3b7963e53c09d8",
"text": "We are witnessing daily acquisition of large amounts of aerial and satellite imagery. Analysis of such large quantities of data can be helpful for many practical applications. In this letter, we present an automatic content-based analysis of aerial imagery in order to detect and mark arbitrary objects or regions in high-resolution images. For that purpose, we proposed a method for automatic object detection based on a convolutional neural network. A novel two-stage approach for network training is implemented and verified in the tasks of aerial image classification and object detection. First, we tested the proposed training approach using UCMerced data set of aerial images and achieved accuracy of approximately 98.6%. Second, the method for automatic object detection was implemented and verified. For implementation on GPGPU, a required processing time for one aerial image of size 5000 × 5000 pixels was around 30 s.",
"title": ""
},
{
"docid": "605b95e3c0448b5ce9755ce6289894d7",
"text": "Website success hinges on how credible the consumers consider the information on the website. Unless consumers believe the website's information is credible, they are not likely to be willing to act on the advice and will not develop loyalty to the website. This paper reports on how individual differences and initial website impressions affect perceptions of information credibility of an unfamiliar advice website. Results confirm that several individual difference variables and initial impression variables (perceived reputation, perceived website quality, and willingness to explore the website) play an important role in developing information credibility of an unfamiliar website, with first impressions and individual differences playing equivalent roles. The study also confirms the import of information credibility by demonstrating it positively influences perceived usefulness, perceived site risk, willingness to act on website advice, and perceived consumer loyalty toward the website.",
"title": ""
},
{
"docid": "24bd9a2f85b33b93609e03fc67e9e3a9",
"text": "With the rapid development of high-throughput technologies, researchers can sequence the whole metagenome of a microbial community sampled directly from the environment. The assignment of these metagenomic reads into different species or taxonomical classes is a vital step for metagenomic analysis, which is referred to as binning of metagenomic data. In this paper, we propose a new method TM-MCluster for binning metagenomic reads. First, we represent each metagenomic read as a set of \"k-mers\" with their frequencies occurring in the read. Then, we employ a probabilistic topic model -- the Latent Dirichlet Allocation (LDA) model to the reads, which generates a number of hidden \"topics\" such that each read can be represented by a distribution vector of the generated topics. Finally, as in the MCluster method, we apply SKWIC -- a variant of the classical K-means algorithm with automatic feature weighting mechanism to cluster these reads represented by topic distributions. Experiments show that the new method TM-MCluster outperforms major existing methods, including AbundanceBin, MetaCluster 3.0/5.0 and MCluster. This result indicates that the exploitation of topic modeling can effectively improve the binning performance of metagenomic reads.",
"title": ""
},
{
"docid": "318daea2ef9b0d7afe2cb08edcfe6025",
"text": "Stock market prediction has become an attractive investigation topic due to its important role in economy and beneficial offers. There is an imminent need to uncover the stock market future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with small error ratio and improve the accuracy of prediction. This prediction model is based on sentiment analysis of financial news and historical stock market prices. This model provides better accuracy results than all previous studies by considering multiple types of news related to market and company with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices together to predict future stock prices. This improved the prediction accuracy up to 89.80%.",
"title": ""
},
{
"docid": "fc6726bddf3d70b7cb3745137f4583c1",
"text": "Maximum power point tracking (MPPT) is a very important necessity in a system of energy conversion from a renewable energy source. Many research papers have been produced with various schemes over past decades for the MPPT in photovoltaic (PV) system. This research paper inspires its motivation from the fact that the keen study of these existing techniques reveals that there is still quite a need for an absolutely generic and yet very simple MPPT controller which should have all the following traits: total independence from system's parameters, ability to reach the global maxima in minimal possible steps, the correct sense of tracking direction despite the abrupt atmospheric or parametrical changes, and finally having a very cost-effective and energy efficient hardware with the complexity no more than that of a minimal MPPT algorithm like Perturb and Observe (P&O). The MPPT controller presented in this paper is a successful attempt to fulfil all these requirements. It extends the MPPT techniques found in the recent research papers with some innovations in the control algorithm and a simplistic hardware. The simulation results confirm that the proposed MPPT controller is very fast, very efficient, very simple and low cost as compared to the contemporary ones.",
"title": ""
},
{
"docid": "39340461bb4e7352ab6af3ce10460bd7",
"text": "This paper presents an 8 bit 1.8 V 500 MSPS digital- to analog converter using 0.18mum double poly five metal CMOS technology for frequency domain applications. The proposed DAC is composed of four unit cell matrix. A novel decoding logic is used to remove the inter block code transition (IBT) glitch. The proposed DAC shows less number of switching for a monotonic input and the product of number of switching and the current value associated with switching is also less than the segmented DAC. The SPICE simulated DNL and INL is 0.1373 LSB and 0.331 LSB respectively and are better than the segmented DAC. The proposed DAC also shows better SNDR and THD than the segmented DAC. The MATLAB simulated THD, SFDR and SNDR is more than 45 dB, 35 dB and 44 dB respectively at 500MS/s with a 10 MHz input sine wave with incoherent timing response between current switches.",
"title": ""
},
{
"docid": "3d11b4b645a32ff0d269fc299e7cf646",
"text": "The static one-to-one binding of hosts to IP addresses allows adversaries to conduct thorough reconnaissance in order to discover and enumerate network assets. Specifically, this fixed address mapping allows distributed network scanners to aggregate information gathered at multiple locations over different times in order to construct an accurate and persistent view of the network. The unvarying nature of this view enables adversaries to collaboratively share and reuse their collected reconnaissance information in various stages of attack planning and execution. This paper presents a novel moving target defense (MTD) technique which enables host-to-IP binding of each destination host to vary randomly across the network based on the source identity (spatial randomization) as well as time (temporal randomization). This spatio-temporal randomization will distort attackers' view of the network by causing the collected reconnaissance information to expire as adversaries transition from one host to another or if they stay long enough in one location. Consequently, adversaries are forced to re-scan the network frequently at each location or over different time intervals. These recurring probings significantly raises the bar for the adversaries by slowing down the attack progress, while improving its detectability. We introduce three novel metrics for quantifying the effectiveness of MTD defense techniques: deterrence, deception, and detectability. Using these metrics, we perform rigorous theoretical and experimental analysis to evaluate the efficacy of this approach. These analyses show that our approach is effective in countering a significant number of sophisticated threat models including collaborative reconnaissance, worm propagation, and advanced persistent threat (APT), in an evasion-free manner.",
"title": ""
},
{
"docid": "050ca96de473a83108b5ac26f4ac4349",
"text": "The concept of graphene-based two-dimensional leaky-wave antenna (LWA), allowing both frequency tuning and beam steering in the terahertz band, is proposed in this paper. In its design, a graphene sheet is used as a tuning part of the high-impedance surface (HIS) that acts as the ground plane of such 2-D LWA. It is shown that, by adjusting the graphene conductivity, the reflection phase of the HIS can be altered effectively, thus controlling the resonant frequency of the 2-D LWA over a broad band. In addition, a flexible adjustment of its pointing direction can be achieved over a wide range, while keeping the operating frequency fixed. Transmission-line methods are used to accurately predict the antenna reconfigurable characteristics, which are further verified by means of commercial full-wave analysis tools.",
"title": ""
},
{
"docid": "4535a5961d6628f2f4bafb1d99821bbb",
"text": "The prevalence of diabetes has dramatically increased worldwide due to the vast increase in the obesity rate. Diabetic nephropathy is one of the major complications of type 1 and type 2 diabetes and it is currently the leading cause of end-stage renal disease. Hyperglycemia is the driving force for the development of diabetic nephropathy. It is well known that hyperglycemia increases the production of free radicals resulting in oxidative stress. While increases in oxidative stress have been shown to contribute to the development and progression of diabetic nephropathy, the mechanisms by which this occurs are still being investigated. Historically, diabetes was not thought to be an immune disease; however, there is increasing evidence supporting a role for inflammation in type 1 and type 2 diabetes. Inflammatory cells, cytokines, and profibrotic growth factors including transforming growth factor-β (TGF-β), monocyte chemoattractant protein-1 (MCP-1), connective tissue growth factor (CTGF), tumor necrosis factor-α (TNF-α), interleukin-1 (IL-1), interleukin-6 (IL-6), interleukin-18 (IL-18), and cell adhesion molecules (CAMs) have all been implicated in the pathogenesis of diabetic nephropathy via increased vascular inflammation and fibrosis. The stimulus for the increase in inflammation in diabetes is still under investigation; however, reactive oxygen species are a primary candidate. Thus, targeting oxidative stress-inflammatory cytokine signaling could improve therapeutic options for diabetic nephropathy. The current review will focus on understanding the relationship between oxidative stress and inflammatory cytokines in diabetic nephropathy to help elucidate the question of which comes first in the progression of diabetic nephropathy, oxidative stress, or inflammation.",
"title": ""
},
{
"docid": "ef4ea289a20a833df9495f7bbe8d337f",
"text": "Plant growth and development are adversely affected by salinity – a major environmental stress that limits agricultural production. This chapter provides an overview of the physiological mechanisms by which growth and development of crop plants are affected by salinity. The initial phase of growth reduction is due to an osmotic effect, is similar to the initial response to water stress and shows little genotypic differences. The second, slower effect is the result of salt toxicity in leaves. In the second phase a salt sensitive species or genotype differs from a more salt tolerant one by its inability to prevent salt accumulation in leaves to toxic levels. Most crop plants are salt tolerant at germination but salt sensitive during emergence and vegetative development. Root and shoot growth is inhibited by salinity; however, supplemental Ca partly alleviates the growth inhibition. The Ca effect appears related to the maintenance of plasma membrane selectivity for K over Na. Reproductive development is considered less sensitive to salt stress than vegetative growth, although in wheat salt stress can hasten reproductive growth, inhibit spike development and decrease the yield potential, whereas in the more salt sensitive rice, low yield is primarily associated with reduction in tillers, and by sterile spikelets in some cultivars. Plants with improved salt tolerance must thrive under saline field conditions with numerous additional stresses. Salinity shows interactions with several stresses, among others with boron toxicity, but the mechanisms of salinity-boron interactions are still poorly known. To better understand crop tolerance under saline field conditions, future research should focus on tolerance of crops to a combination of stresses",
"title": ""
},
{
"docid": "45c917e024842ff7e087e4c46a05be25",
"text": "A centrifugal pump that employs a bearingless motor with 5-axis active control has been developed. In this paper, a novel bearingless canned motor pump is proposed, and differences from the conventional structure are explained. A key difference between the proposed and conventional bearingless canned motor pumps is the use of passive magnetic bearings; in the proposed pump, the amount of permanent magnets (PMs) is reduced by 30% and the length of the rotor is shortened. Despite the decrease in the total volume of PMs, the proposed structure can generate large suspension forces and high torque compared with the conventional design by the use of the passive magnetic bearings. In addition, levitation and rotation experiments demonstrated that the proposed motor is suitable for use as a bearingless canned motor pump.",
"title": ""
},
{
"docid": "7368671d20b4f4b30a231d364eb501bc",
"text": "In this article, we study the problem of Web user profiling, which is aimed at finding, extracting, and fusing the “semantic”-based user profile from the Web. Previously, Web user profiling was often undertaken by creating a list of keywords for the user, which is (sometimes even highly) insufficient for main applications. This article formalizes the profiling problem as several subtasks: profile extraction, profile integration, and user interest discovery. We propose a combination approach to deal with the profiling tasks. Specifically, we employ a classification model to identify relevant documents for a user from the Web and propose a Tree-Structured Conditional Random Fields (TCRF) to extract the profile information from the identified documents; we propose a unified probabilistic model to deal with the name ambiguity problem (several users with the same name) when integrating the profile information extracted from different sources; finally, we use a probabilistic topic model to model the extracted user profiles, and construct the user interest model. Experimental results on an online system show that the combination approach to different profiling tasks clearly outperforms several baseline methods. The extracted profiles have been applied to expert finding, an important application on the Web. Experiments show that the accuracy of expert finding can be improved (ranging from +6% to +26% in terms of MAP) by taking advantage of the profiles.",
"title": ""
},
{
"docid": "f174469e907b60cd481da6b42bafa5f9",
"text": "A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.",
"title": ""
},
{
"docid": "aa32c46e8d2c5daf2f126b8c5d8b9223",
"text": "We demonstrate the application of advanced 3D visualization techniques to determine the optimal implant design and position in hip joint replacement planning. Our methods take as input the physiological stress distribution inside a patient's bone under load and the stress distribution inside this bone under the same load after a simulated replacement surgery. The visualization aims at showing principal stress directions and magnitudes, as well as differences in both distributions. By visualizing changes of normal and shear stresses with respect to the principal stress directions of the physiological state, a comparative analysis of the physiological stress distribution and the stress distribution with implant is provided, and the implant parameters that most closely replicate the physiological stress state in order to avoid stress shielding can be determined. Our method combines volume rendering for the visualization of stress magnitudes with the tracing of short line segments for the visualization of stress directions. To improve depth perception, transparent, shaded, and antialiased lines are rendered in correct visibility order, and they are attenuated by the volume rendering. We use a focus+context approach to visually guide the user to relevant regions in the data, and to support a detailed stress analysis in these regions while preserving spatial context information. Since all of our techniques have been realized on the GPU, they can immediately react to changes in the simulated stress tensor field and thus provide an effective means for optimal implant selection and positioning in a computational steering environment.",
"title": ""
},
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "78c477aeb6a27cf5b4de028c0ecd7b43",
"text": "This paper addresses the problem of speaker clustering in telephone conversations. Recently, a new clustering algorithm named affinity propagation (AP) is proposed. It exhibits fast execution speed and finds clusters with low error. However, AP is an unsupervised approach which may make the resulting number of clusters different from the actual one. This deteriorates the speaker purity dramatically. This paper proposes a modified method named supervised affinity propagation (SAP), which automatically reruns the AP procedure to make the final number of clusters converge to the specified number. Experiments are carried out to compare SAP with traditional k-means and agglomerative hierarchical clustering on 4-hour summed channel conversations in the NIST 2004 Speaker Recognition Evaluation. Experiment results show that the SAP method leads to a noticeable speaker purity improvement with slight cluster purity decrease compared with AP.",
"title": ""
},
{
"docid": "edfaa4259def05daba17f71ffafac407",
"text": "Access control is one of the most important security mechanisms in cloud computing. Attributed based encryption provides an approach that allows data owners to integrate data access policies within the encrypted data. However, little work has been done to explore flexible authorization in specifying the data user's privileges and enforcing the data owner's policy in cloud based environments. In this paper, we propose a hierarchical attribute based access control scheme by extending ciphertext-policy attribute-based encryption (CP-ABE) with a hierarchical structure of multiauthorities and exploiting attribute-based signature (ABS). The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits fine-grained access control with authentication in supporting write privilege on outsourced data in cloud computing. In addition, we decouple the task of policy management from security enforcement by using the extensible access control markup language (XACML) framework. Extensive analysis shows that our scheme is both efficient and scalable in dealing with access control for outsourced data in cloud computing.",
"title": ""
},
{
"docid": "af9768101a634ab57eb2554953ef63ec",
"text": "Very recently, there has been a perfect storm of technical advances that has culminated in the emergence of a new interaction modality: on-body interfaces. Such systems enable the wearer to use their body as an input and output platform with interactive graphics. Projects such as PALMbit and Skinput sought to answer the initial and fundamental question: whether or not on-body interfaces were technologically possible. Although considerable technical work remains, we believe it is important to begin shifting the question away from how and what, and towards where, and ultimately why. These are the class of questions that inform the design of next generation systems. To better understand and explore this expansive space, we employed a mixed-methods research process involving more than two thousand individuals. This started with high-resolution, but low-detail crowdsourced data. We then combined this with rich, expert interviews, exploring aspects ranging from aesthetics to kinesthetics. The results of this complimentary, structured exploration, point the way towards more comfortable, efficacious, and enjoyable on-body user experiences.",
"title": ""
},
{
"docid": "3b6cef052cd7a7acc765b44292af51cc",
"text": "Minimizing travel time is critical for the successful operation of emergency vehicles. Preemption can significantly help emergency vehicles reach the intended destination faster. Majority of the current studies focus on minimizing and/or eliminating delays for EVs and do not consider the negative impacts of preemption on urban traffic. One primary negative impact is extended delays for non-EV traffic due to preemption that is addressed in this paper. We propose an Adaptive Preemption of Traffic (APT) system for Emergency Vehicles in an Intelligent Transportation System. We utilize the knowledge of current traffic conditions in the transportation system to adaptively preempt traffic at signals along the path of EVs so as to minimize, if not eliminate stopped delays for EVs while simultaneously minimizing the delays for non-emergency vehicles in the system. Through extensive simulation results, we show substantial reduction in delays for both EVs.",
"title": ""
}
] |
scidocsrr
|
6ea36dc0ba7e14014f9921d1a3804b11
|
An interior-point stochastic approximation method and an L1-regularized delta rule
|
[
{
"docid": "00c03b12344e91c22d0ddcb370c5d993",
"text": "TREC’s Spam Filtering Track (Cormack & Lynam, 2005) introduces a standard testing framework that is designed to model a spam filter’s usage as closely as possible, to measure quantities that reflect the filter’s effectiveness for its intended purpose, and to yield repeatable (i.e. controlled and statistically valid) results. The TREC Spam Filter Evaluation Toolkit is free software that, given a corpus and a filter, automatically runs the filter on each message in the corpus, compares the result to the gold standard for the corpus, and reports effectiveness measures with 95% confidence limits. The corpus consists of a chronological sequence of email messages, and a gold standard judgement for each message. We are concerned here with the creation of appropriate corpora for use with the toolkit.",
"title": ""
}
] |
[
{
"docid": "83d50f7c66b14116bfa627600ded28d6",
"text": "Diet can affect cognitive ability and behaviour in children and adolescents. Nutrient composition and meal pattern can exert immediate or long-term, beneficial or adverse effects. Beneficial effects mainly result from the correction of poor nutritional status. For example, thiamin treatment reverses aggressiveness in thiamin-deficient adolescents. Deleterious behavioural effects have been suggested; for example, sucrose and additives were once suspected to induce hyperactivity, but these effects have not been confirmed by rigorous investigations. In spite of potent biological mechanisms that protect brain activity from disruption, some cognitive functions appear sensitive to short-term variations of fuel (glucose) availability in certain brain areas. A glucose load, for example, acutely facilitates mental performance, particularly on demanding, long-duration tasks. The mechanism of this often described effect is not entirely clear. One aspect of diet that has elicited much research in young people is the intake/omission of breakfast. This has obvious relevance to school performance. While effects are inconsistent in well-nourished children, breakfast omission deteriorates mental performance in malnourished children. Even intelligence scores can be improved by micronutrient supplementation in children and adolescents with very poor dietary status. Overall, the literature suggests that good regular dietary habits are the best way to ensure optimal mental and behavioural performance at all times. Then, it remains controversial whether additional benefit can be gained from acute dietary manipulations. In contrast, children and adolescents with poor nutritional status are exposed to alterations of mental and/or behavioural functions that can be corrected, to a certain extent, by dietary measures.",
"title": ""
},
{
"docid": "1e18be7d7e121aa899c96cbcf5ea906b",
"text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. ________________________________________ We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility. BUNDLING INFORMATION GOODS Page 1",
"title": ""
},
{
"docid": "fcf0e3049dffd74064aa936028fa71e4",
"text": "For the design and test of functional modules of an automated vehicle, it is essential to define interfaces. While interfaces on the perception side, like object lists, point clouds or occupancy grids, are to a certain degree settled already, they are quite vague in the consecutive steps of context modeling and in particular on the side of driving execution. The authors consider the scene as the central interface between perception and behavior planning & control. Within the behavior planning & control block, a situation is a central data container. A scenario is a common approach to substantiate test cases for functional modules and can be used to detail the functional description of a system. However, definitions of these terms are often-at best-vague or even contradictory. This paper will review these definitions and come up with a consistent definition for each term. Moreover, we present an example for the implementation of each of these interfaces.",
"title": ""
},
{
"docid": "3db1505c98ecb39ad11374d1a7a13ca3",
"text": "Distributed Denial-of-Service (DDoS) attacks are usually launched through the botnet, an “army” of compromised nodes hidden in the network. Inferential tools for DDoS mitigation should accordingly enable an early and reliable discrimination of the normal users from the compromised ones. Unfortunately, the recent emergence of attacks performed at the application layer has multiplied the number of possibilities that a botnet can exploit to conceal its malicious activities. New challenges arise, which cannot be addressed by simply borrowing the tools that have been successfully applied so far to earlier DDoS paradigms. In this paper, we offer basically three contributions: 1) we introduce an abstract model for the aforementioned class of attacks, where the botnet emulates normal traffic by continually learning admissible patterns from the environment; 2) we devise an inference algorithm that is shown to provide a consistent (i.e., converging to the true solution as time elapses) estimate of the botnet possibly hidden in the network; and 3) we verify the validity of the proposed inferential strategy on a test-bed environment. Our tests show that, for several scenarios of implementation, the proposed botnet identification algorithm needs an observation time in the order of (or even less than) 1 min to identify correctly almost all bots, without affecting the normal users’ activity.",
"title": ""
},
{
"docid": "35c70b84c44a9c6f14de2941059b0e21",
"text": "An important aspect of human perception is anticipation, which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings. Anticipating which activities will a human do next (and how) can enable an assistive robot to plan ahead for reactive responses. Furthermore, anticipation can even improve the detection accuracy of past activities. The challenge, however, is two-fold: We need to capture the rich context for modeling the activities and object affordances, and we need to anticipate the distribution over a large space of future human activities. In this work, we represent each possible future using an anticipatory temporal conditional random field (ATCRF) that models the rich spatial-temporal relations through object affordances. We then consider each ATCRF as a particle and represent the distribution over the potential futures using a set of particles. In extensive evaluation on CAD-120 human activity RGB-D dataset, we first show that anticipation improves the state-of-the-art detection results. We then show that for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of top three predictions actually happened) of 84.1, 74.4 and 62.2 percent for an anticipation time of 1, 3 and 10 seconds respectively. Finally, we also show a robot using our algorithm for performing a few reactive responses.",
"title": ""
},
{
"docid": "0e91b49f051f960d8f4c7786f2bdc257",
"text": "The importance of measuring the performance of e-government cannot be overemphasized. In this paper, a flexible framework is suggested to choose an appropriate strategy to measure the tangible and intangible benefits of e-government. An Indian case study of NDMC (New Delhi Municipal Corporation) has been taken up for analysis and placement into the framework. The results obtained suggest that to have a proper evaluation of tangible and intangible benefits of e-government, the projects should be in a mature stage with proper information systems in place. All of the e-government projects in India are still in a nascent stage; hence, proper information flow for calculating 'return on e-government' considering tangible and intangible benefits cannot be fully ascertained.",
"title": ""
},
{
"docid": "ac07f85a8d6114061569e043e19747f5",
"text": "In this paper, some novel and modified driving techniques for a single switch zero voltage switching (ZVS) topology are introduced. These medium/high frequency and digitally synthesized driving techniques can be applied to decrease the dangers of peak currents that may damage the switching circuit when switching in out of nominal conditions. The technique is fully described and evaluated experimentally in a 2500W prototype intended for a domestic induction cooking application.",
"title": ""
},
{
"docid": "be68f44aca9f8c88c2757a6910d7e5a5",
"text": "Creative computational systems have often been largescale endeavors, based on elaborate models of creativity and sometimes featuring an accumulation of heuristics and numerous subsystems. An argument is presented for facilitating the exploration of creativity through small-scale systems, which can be more transparent, reusable, focused, and easily generalized across domains and languages. These systems retain the ability, however, to model important aspects of aesthetic and creative processes. Examples of extremely simple story generators are presented along with their implications for larger-scale systems. A case study focuses on a system that implements the simplest possible model of ellipsis.",
"title": ""
},
{
"docid": "8174a4a425dc7f097be101a8461268a0",
"text": "One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media, and managing life.",
"title": ""
},
{
"docid": "fb0c1c324be810386a25cec8eae6f37c",
"text": "vector distribution, a new four-step search (4SS) algorithm with center-biased checking point pattern for fast block motion estimation is proposed in this paper. Halfway-stop technique is employed in the new algorithm with searching steps of 2 to 4 and the total number of checking points is varied from 17 to 27. Simulation results show that the proposed 4SS performs better than the well-known three-step search and has similar performance to the new three-step search (N3SS) in terms of motion compensation errors. In addition, the 4SS also reduces the worst-case computational requirement from 33 to 27 search points and the average computational requirement from 21 to 19 search points as compared with N3SS. _______________________________________ This paper was published in IEEE Trans. Circuits Syst. Video Technol., vol. 6, No. 3, pp. 313-317, Jun. 1996. The authors are with the CityU Image Processing Lab, Department of Electronic Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong. Email: elmpo@cityu.edu.hk",
"title": ""
},
{
"docid": "f01062b680514ce37fe029246fa30e17",
"text": "Students have different levels of motivation, different attitudes about teaching and learning, and different responses to specific classroom environments and instructional practices. The more thoroughly instructors understand the differences, the better chance they have of meeting the diverse learning needs of all of their students. Three categories of diversity that have been shown to have important implications for teaching and learning are differences in students’ learning styles (characteristic ways of taking in and processing information), approaches to learning (surface, deep, and strategic), and intellectual development levels (attitudes about the nature of knowledge and how it should be acquired and evaluated). This article reviews models that have been developed for each of these categories, outlines their pedagogical implications, and suggests areas for further study.",
"title": ""
},
{
"docid": "213f816da43e7ce43e979418e35471e6",
"text": "A novel saliency detection algorithm for video sequences based on the random walk with restart (RWR) is proposed in this paper. We adopt RWR to detect spatially and temporally salient regions. More specifically, we first find a temporal saliency distribution using the features of motion distinctiveness, temporal consistency, and abrupt change. Among them, the motion distinctiveness is derived by comparing the motion profiles of image patches. Then, we employ the temporal saliency distribution as a restarting distribution of the random walker. In addition, we design the transition probability matrix for the walker using the spatial features of intensity, color, and compactness. Finally, we estimate the spatiotemporal saliency distribution by finding the steady-state distribution of the walker. The proposed algorithm detects foreground salient objects faithfully, while suppressing cluttered backgrounds effectively, by incorporating the spatial transition matrix and the temporal restarting distribution systematically. Experimental results on various video sequences demonstrate that the proposed algorithm outperforms conventional saliency detection algorithms qualitatively and quantitatively.",
"title": ""
},
{
"docid": "049674034f41b359a7db7b3c5ba7c541",
"text": "This paper extends and contributes to emerging debates on the validation of interpretive research (IR) in management accounting. We argue that IR has the potential to produce not only subjectivist, emic understandings of actors’ meanings, but also explanations, characterised by a certain degree of ‘‘thickness”. Mobilising the key tenets of the modern philosophical theory of explanation and the notion of abduction, grounded in pragmatist epistemology, we explicate how explanations may be developed and validated, yet remaining true to the core premises of IR. We focus on the intricate relationship between two arguably central aspects of validation in IR, namely authenticity and plausibility. Working on the assumption that validation is an important, but potentially problematic concern in all serious scholarly research, we explore whether and how validation efforts are manifest in IR using two case studies as illustrative examples. Validation is seen as an issue of convincing readers of the authenticity of research findings whilst simultaneously ensuring that explanations are deemed plausible. Whilst the former is largely a matter of preserving the emic qualities of research accounts, the latter is intimately linked to the process of abductive reasoning, whereby different theories are applied to advance thick explanations. This underscores the view of validation as a process, not easily separated from the ongoing efforts of researchers to develop explanations as research projects unfold and far from reducible to mere technicalities of following pre-specified criteria presumably minimising various biases. These properties detract from a view of validation as conforming to prespecified, stable, and uniform criteria and allow IR to move beyond the ‘‘crisis of validity” arguably prevailing in the social sciences. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3ed737115f439446d1ff78c81a31f48c",
"text": "A 24-year-old pregnant woman with Marfan's syndrome delivered by cesarean section during the 38th week of gestation. Although aortic root dilatation did not increase during pregnancy, three months after delivery, the patient noticed a pulsatile abdominal mass. Aortic aneurysm was diagnosed and surgical replacement of the infrarenal abdominal aorta to the common iliac arteries and reconstruction of the inferior mesenteric artery were performed. Moreover, the patient subsequently developed a Stanford type B thoracic aortic dissection, even after more than four months of beta-blockade.",
"title": ""
},
{
"docid": "023302562ddfe48ac81943fedcf881b7",
"text": "Knitty is an interactive design system for creating knitted animals. The user designs a 3D surface model using a sketching interface. The system automatically generates a knitting pattern and then visualizes the shape of the resulting 3D animal model by applying a simple physics simulation. The user can see the resulting shape before beginning the actual knitting. The system also provides a production assistant interface for novices. The user can easily understand how to knit each stitch and what to do in each step. In a workshop for novices, we observed that even children can design their own knitted animals using our system.",
"title": ""
},
{
"docid": "713d709c14c8943638d2c80e3aeaded2",
"text": "Microfluidics-based biochips combine electronics with biology to open new application areas such as point-of-care medical diagnostics, on-chip DNA analysis, and automated drug discovery. Bioassays are mapped to microfluidic arrays using synthesis tools, and they are executed through the manipulation of sample and reagent droplets by electrical means. Most prior work on CAD for biochips has assumed independent control of electrodes using a large number of (electrical) input pins. Such solutions are not feasible for low-cost disposable biochips that are envisaged for many field applications. A more promising design strategy is to divide the microfluidic array into smaller partitions and use a small number of electrodes to control the electrodes in each partition. We propose a partitioning algorithm based on the concept of \"droplet trace\", which is extracted from the scheduling and droplet routing results produced by a synthesis tool. An efficient pin assignment method, referred to as the \"Connect-5 algorithm\", is combined with the array partitioning technique based on droplet traces. The array partitioning and pin assignment methods are evaluated using a set of multiplexed bioassays.",
"title": ""
},
{
"docid": "185f9e66a467f449d299a4fbbb69bcb9",
"text": "Social media is becoming popular for news consumption due to its fast dissemination, easy access, and low cost. However, it also enables the wide propagation of fake news, i.e., news with intentionally false information. Detecting fake news is an important task, which not only ensures users receive authentic information but also helps maintain a trustworthy news ecosystem. The majority of existing detection algorithms focus on finding clues from news contents, which are generally not effective because fake news is often intentionally written to mislead users by mimicking true news. Therefore, we need to explore auxiliary information to improve detection. The social context during news dissemination process on social media forms the inherent tri-relationship, the relationship among publishers, news pieces, and users, which has the potential to improve fake news detection. For example, partisan-biased publishers are more likely to publish fake news, and low-credible users are more likely to share fake news. In this paper, we study the novel problem of exploiting social context for fake news detection. We propose a tri-relationship embedding framework TriFN, which models publisher-news relations and user-news interactions simultaneously for fake news classification. We conduct experiments on two real-world datasets, which demonstrate that the proposed approach significantly outperforms other baseline methods for fake news detection.",
"title": ""
},
{
"docid": "0034b7f8160f504bd3de5125cf33fea6",
"text": "By taking into account simultaneously the effects of border traps and interface states, the authors model the alternating current capacitance-voltage (C-V) behavior of high-mobility substrate metal-oxide-semiconductor (MOS) capacitors. The results are validated with the experimental In0.53Ga0.47As/ high-κ and InP/high-κ (C-V) curves. The simulated C-V and conductance-voltage (G-V) curves reproduce comprehensively the experimentally measured capacitance and conductance data as a function of bias voltage and measurement frequency, over the full bias range going from accumulation to inversion and full frequency spectra from 100 Hz to 1 MHz. The interface state densities of In0.53Ga0.47As and InP MOS devices with various high-κ dielectrics, together with the corresponding border trap density inside the high-κ oxide, were derived accordingly. The derived interface state densities are consistent to those previously obtained with other measurement methods. The border traps, distributed over the thickness of the high- κ oxide, show a large peak density above the two semiconductor conduction band minima. The total density of border traps extracted is on the order of 1019 cm-3. Interface and border trap distributions for InP and In0.53Ga0.47As interfaces with high-κ oxides show remarkable similarities on an energy scale relative to the vacuum reference.",
"title": ""
},
{
"docid": "96c30be2e528098e86b84b422d5a786a",
"text": "The LSTM is a popular neural network model for modeling or analyzing the time-varying data. The main operation of LSTM is a matrix-vector multiplication and it becomes sparse (spMxV) due to the widely-accepted weight pruning in deep learning. This paper presents a new sparse matrix format, named CBSR, to maximize the inference speed of the LSTM accelerator. In the CBSR format, speed-up is achieved by balancing out the computation loads over PEs. Along with the new format, we present a simple network transformation to completely remove the hardware overhead incurred when using the CBSR format. Also, the detailed analysis on the impact of network size or the number of PEs is performed, which lacks in the prior work. The simulation results show 16∼38% improvement in the system performance compared to the well-known CSC/CSR format. The power analysis is also performed in 65nm CMOS technology to show 9∼22% energy savings.",
"title": ""
},
{
"docid": "629001050a1ad1acdfb821081bec23dc",
"text": "Ground-based radar interferometry is an increasingly popular technique for monitoring civil infrastructures. Many research groups, professionals, and companies have tested it in different operative scenarios, so it is time for a first systematic survey of the case studies reported in the literature. This review is addressed especially to the engineers and scientists interested to consider the applicability of the technique to their practice, so it is focused on the issues of the practical cases rather than on theory and principles, which are now well consolidated.",
"title": ""
}
] |
scidocsrr
|